From: Feng Tang
To: linux-mm@kvack.org, Andrew Morton, Michal Hocko, David Rientjes, Dave Hansen, Ben Widawsky
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, Andrea Arcangeli, Mel Gorman, Mike Kravetz, Randy Dunlap, Vlastimil Babka, Andi Kleen, Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [PATCH v7 3/5] mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
Date: Tue, 3 Aug 2021 13:59:20 +0800
Message-Id: <1627970362-61305-4-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1627970362-61305-1-git-send-email-feng.tang@intel.com>
References: <1627970362-61305-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Implement the missing huge page allocation functionality while obeying the preferred node semantics. This is similar to the implementation for general page allocation, as it uses a fallback mechanism to try multiple preferred nodes first, and then all other nodes.

[akpm: fix compiling issue when merging with other hugetlb patch]
[Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]

Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Suggested-by: Michal Hocko
Signed-off-by: Ben Widawsky
Co-developed-by: Feng Tang
Signed-off-by: Feng Tang
---
 mm/hugetlb.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95714fb28150..9279f6d478d9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1166,7 +1166,20 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+		if (page)
+			goto check_reserve;
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+#ifdef CONFIG_NUMA
+check_reserve:
+#endif
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
@@ -2147,6 +2160,21 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	nodemask_t *nodemask;

 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		gfp_t gfp = gfp_mask | __GFP_NOWARN;
+
+		gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
+		page = alloc_surplus_huge_page(h, gfp, nid, nodemask, false);
+		if (page) {
+			mpol_cond_put(mpol);
+			return page;
+		}
+
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
 	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask, false);
 	mpol_cond_put(mpol);
--
2.14.1