From: Pavel Tatashin
Date: Thu, 3 Dec 2020 10:06:21 -0500
Subject: Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations
To: John Hubbard
Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko,
 David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
 Tyler Hicks, Joonsoo Kim, mike.kravetz@oracle.com, Steven Rostedt,
 Ingo Molnar, Jason Gunthorpe, Peter Zijlstra, Mel Gorman,
 Matthew Wilcox, David Rientjes
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
 <20201202052330.474592-6-pasha.tatashin@soleen.com>
 <18c9ab89-185b-395a-97ce-31940388df0e@nvidia.com>
In-Reply-To: <18c9ab89-185b-395a-97ce-31940388df0e@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Dec 3, 2020 at 3:17 AM John Hubbard wrote:
>
> On 12/1/20 9:23 PM, Pavel Tatashin wrote:
> > PF_MEMALLOC_NOMOVABLE is only honored for CMA allocations, extend
> > this flag to work for any allocations by removing __GFP_MOVABLE from
> > gfp_mask when this flag is passed in the current context, thus
> > prohibiting allocations from ZONE_MOVABLE.
> >
> > Signed-off-by: Pavel Tatashin
> > ---
> >  mm/hugetlb.c    |  2 +-
> >  mm/page_alloc.c | 26 ++++++++++++++++----------
> >  2 files changed, 17 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 02213c74ed6b..00e786201d8b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1036,7 +1036,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
> >       bool nomovable = !!(current->flags & PF_MEMALLOC_NOMOVABLE);
> >
> >       list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> > -             if (nomovable && is_migrate_cma_page(page))
> > +             if (nomovable && is_migrate_movable(get_pageblock_migratetype(page)))
>
> I wonder if we should add a helper, like is_migrate_cma_page(), that
> avoids having to call get_pageblock_migratetype() at all of the
> callsites?

Good idea, I will add it.

> >                       continue;
> >
> >               if (PageHWPoison(page))
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 611799c72da5..7a6d86d0bc5f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3766,20 +3766,25 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
> >       return alloc_flags;
> > }
> >
> > -static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
> > -                                              unsigned int alloc_flags)
> > +static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
> > +                                          unsigned int alloc_flags)
>
> Actually, maybe the original name should be left intact. This handles
> current alloc flags, which right now happen to only cover CMA flags, so
> the original name seems accurate, right?

The reason I renamed it is that we no longer access the current context;
we only use gfp_mask to get the CMA flag:

>> -       unsigned int pflags = current->flags;

So keeping "current" in the function name would make its intent
misleading.

Thank you,
Pasha