Date: Mon, 22 Feb 2021 14:08:48 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Jesper Dangaard Brouer
Cc: Chuck Lever, Mel Gorman, Linux NFS Mailing List,
	"linux-mm@kvack.org", Jakub Kicinski, "netdev@vger.kernel.org"
Subject: Re: alloc_pages_bulk()
Message-ID: <20210222140848.GI3697@techsingularity.net>
In-Reply-To: <20210222124246.690414a2@carbon>
References: <20210210084155.GA3697@techsingularity.net>
 <20210210124103.56ed1e95@carbon>
 <20210210130705.GC3629@suse.de>
 <20210211091235.GC3697@techsingularity.net>
 <20210211132628.1fe4f10b@carbon>
 <20210215120056.GD3697@techsingularity.net>
 <20210215171038.42f62438@carbon>
 <20210222094256.GH3697@techsingularity.net>
 <20210222124246.690414a2@carbon>

On Mon, Feb 22, 2021 at 12:42:46PM +0100, Jesper Dangaard Brouer wrote:
> On Mon, 22 Feb 2021 09:42:56 +0000
> Mel Gorman wrote:
>
> > On Mon, Feb 15, 2021 at 05:10:38PM +0100, Jesper Dangaard Brouer wrote:
> > > On Mon, 15 Feb 2021 12:00:56 +0000
> > > Mel Gorman wrote:
> > >
> > > > On Thu, Feb 11, 2021 at 01:26:28PM +0100, Jesper Dangaard Brouer wrote:
> > > [...]
> > > > > I also suggest the API can return fewer pages than requested, because
> > > > > I want to "exit"/return if it needs to go into an expensive code path
> > > > > (like the buddy allocator or compaction). I'm assuming we have flags
> > > > > to give us this behavior (via gfp_flags or alloc_flags)?
> > > > >
> > > >
> > > > The API returns the number of pages placed on a list, so policies
> > > > around how aggressively it should allocate the requested number of
> > > > pages could be adjusted without changing the API. Passing in policy
> > > > requests via gfp_flags may be problematic as most (all?) bits are
> > > > already used.
> > >
> > > Well, I was just thinking that I would use GFP_ATOMIC instead of
> > > GFP_KERNEL to "communicate" that I don't want this call to take too
> > > long (like sleeping). I'm not requesting any fancy policy :-)
> > >
> >
> > The NFS use case requires the opposite semantics
> > -- it really needs those allocations to succeed
> > https://lore.kernel.org/r/161340498400.7780.962495219428962117.stgit@klimt.1015granger.net.
>
> Sorry, but that is not how I understand the code.
>
> The code is doing exactly what I'm requesting. If alloc_pages_bulk()
> doesn't return the expected number of pages, then check whether others
> need to run. The old code did schedule_timeout(msecs_to_jiffies(500)),
> while Chuck's patch changes this to call cond_resched(). Thus, it tries
> to avoid blocking the CPU for too long (when allocating many pages).
>
> And the nfsd code seems to handle that the call can be interrupted
> (returning -EINTR) via signal_pending(current). Thus, the nfsd code
> seems able to handle the page allocations failing.
>

I'm waiting to find out exactly what NFSD is currently doing, as the code
in 5.11 is not the same as what Chuck was coding against, so I'm not 100%
certain how it currently works.

> > I've asked what code it's based on as it's not 5.11 and I'll iron that
> > out first.
> >
> > Then it might be clearer what the "can fail" semantics should look like.
> > I think it would be best to have pairs of patches where the first patch
> > adjusts the semantics of the bulk allocator and the second adds a user.
> > That will limit the amount of code carried in the implementation.
> > When the initial users are in place, the implementation can be
> > optimised, as the optimisations will require significant refactoring
> > and I do not want to refactor multiple times.
>
> I guess I should try to code up the usage in page_pool.
>
> What is the latest patch for adding alloc_pages_bulk()?
>

There isn't a usable latest version until I reconcile the nfsd caller.
The only major change in the API right now is dropping the order
parameter; it handles order-0 pages only.

-- 
Mel Gorman
SUSE Labs
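
As a concrete illustration of the caller pattern Jesper describes above
(take a partial allocation, yield with cond_resched() instead of the old
500ms sleep, and bail out on a pending signal), a tolerant caller might
look roughly like the sketch below. This is a sketch only: it assumes the
list-based, count-returning prototype under discussion in this thread
(the final signature was still unsettled at this point), and fill_pages()
and needed are illustrative names, not code from any posted patch.

	/*
	 * Illustrative caller of the in-flux alloc_pages_bulk()
	 * prototype, which here is assumed to return the number of
	 * order-0 pages it added to @list.
	 */
	static int fill_pages(struct list_head *list, unsigned long needed)
	{
		unsigned long filled = 0;

		while (filled < needed) {
			/* Take however many pages the bulk allocator managed. */
			filled += alloc_pages_bulk(GFP_KERNEL, needed - filled, list);
			if (filled >= needed)
				break;
			/* Bail out rather than spin if the task was signalled. */
			if (signal_pending(current))
				return -EINTR;
			/* Yield the CPU instead of sleeping for 500ms. */
			cond_resched();
		}
		return 0;
	}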