From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v4 09/13] mm/mempolicy: Thread allocation for many preferred
Date: Wed, 17 Mar 2021 11:40:06 +0800
Message-Id: <1615952410-36895-10-git-send-email-feng.tang@intel.com>
In-Reply-To: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>

From: Ben Widawsky

In order to support MPOL_PREFERRED_MANY as the mode used by set_mempolicy(2),
alloc_pages_current() needs to support it. This patch does that by using the
new helper function to allocate properly based on policy. All the actual
machinery to make this work was added in the earlier patch ("mm/mempolicy:
Create a page allocator for policy").

Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d21105b..a92efe7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2357,7 +2357,7 @@ EXPORT_SYMBOL(alloc_pages_vma);
 struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 {
 	struct mempolicy *pol = &default_policy;
-	struct page *page;
+	int nid = NUMA_NO_NODE;
 
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
@@ -2367,14 +2367,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_pages_policy(pol, gfp, order,
-					  interleave_nodes(pol));
-	else
-		page = __alloc_pages_nodemask(gfp, order,
-				policy_node(gfp, pol, numa_node_id()),
-				policy_nodemask(gfp, pol));
+		nid = interleave_nodes(pol);
 
-	return page;
+	return alloc_pages_policy(pol, gfp, order, nid);
 }
 EXPORT_SYMBOL(alloc_pages_current);
-- 
2.7.4
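
For readers who want to exercise the path this patch rewrites, here is a
minimal userspace sketch of the set_mempolicy(2) call mentioned in the commit
message. It is not part of the patch: the MPOL_PREFERRED_MANY value (assumed
to be 5 here) is introduced by this series and may not be in an installed
<numaif.h> yet, the node numbers are placeholders, and the program needs
libnuma (-lnuma) for the set_mempolicy() wrapper.

/* sketch.c: set a task-wide "preferred many" policy, then allocate.
 * Assumptions: MPOL_PREFERRED_MANY == 5 (per this series) and nodes 0
 * and 1 exist on the test machine.
 */
#include <numaif.h>	/* set_mempolicy(), MPOL_* (libnuma) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; adjust to the merged uapi header */
#endif

int main(void)
{
	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* prefer nodes 0-1 */

	/* Task-wide policy: kernel allocations that honor the task policy
	 * (alloc_pages_current() among them) should now prefer the given
	 * nodes and fall back to the rest of the system when they are full. */
	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8) == -1) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	void *buf = malloc(1 << 20);
	if (buf) {
		memset(buf, 0, 1 << 20);	/* fault pages in under the new policy */
		free(buf);
	}
	return EXIT_SUCCESS;
}

Build with something like: gcc -o sketch sketch.c -lnuma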