From: Feng Tang <feng.tang@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v4 07/13] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
Date: Wed, 17 Mar 2021 11:40:04 +0800
Message-Id: <1615952410-36895-8-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>

From: Ben Widawsky <ben.widawsky@intel.com>

Begin the real plumbing for handling this new policy. Now that the
internal representation for preferred nodes and bound nodes is the
same, and we can envision what multiple preferred nodes will behave
like, there are obvious places where we can simply reuse the bind
behavior.

In v1 of this series, the moral equivalent was: "mm: Finish handling
MPOL_PREFERRED_MANY". Like that patch, this one implements the easiest
spots for the new policy. Unlike that patch, this one simply reuses the
BIND behavior.

Link: https://lore.kernel.org/r/20200630212517.308045-8-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)
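Illustrative note, not part of Ben's patch: the hunks below only touch
kernel-internal plumbing. For context, here is a minimal userspace
sketch of how the new policy would be requested once the series lands.
It assumes MPOL_PREFERRED_MANY gets its value from this series' earlier
uapi patch (the constant below is only a placeholder) and invokes
set_mempolicy(2) via syscall(2), since libnuma support is not assumed:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5   /* placeholder; use the real uapi value */
#endif

int main(void)
{
        /* Prefer nodes 0 and 2. Unlike MPOL_BIND, the kernel may still
         * fall back to other nodes when the preferred ones are full. */
        unsigned long nodemask = (1UL << 0) | (1UL << 2);

        if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodemask,
                    8 * sizeof(nodemask))) {
                perror("set_mempolicy");
                return 1;
        }
        /* Allocations made from here on prefer nodes 0 and 2. */
        return 0;
}

That preference-with-fallback semantic is what separates
MPOL_PREFERRED_MANY from MPOL_BIND; the hunks below share only the
nodemask bookkeeping with BIND, not its hard restriction.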
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eba207e..d945f29 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -963,8 +963,6 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	switch (p->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*nodes = p->nodes;
-		break;
 	case MPOL_PREFERRED_MANY:
 		*nodes = p->nodes;
 		break;
@@ -1928,7 +1926,8 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
 	/* Lower zones don't get a nodemask applied for MPOL_BIND */
-	if (unlikely(policy->mode == MPOL_BIND) &&
+	if (unlikely(policy->mode == MPOL_BIND ||
+		     policy->mode == MPOL_PREFERRED_MANY) &&
 	    apply_policy_zone(policy, gfp_zone(gfp)) &&
 	    cpuset_nodemask_valid_mems_allowed(&policy->nodes))
 		return &policy->nodes;
@@ -1984,7 +1983,6 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	switch (policy->mode) {
-	case MPOL_PREFERRED_MANY:
 	case MPOL_PREFERRED:
 		/*
 		 * handled MPOL_F_LOCAL above
@@ -1994,6 +1992,7 @@ unsigned int mempolicy_slab_node(void)
 	case MPOL_INTERLEAVE:
 		return interleave_nodes(policy);
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND: {
 		struct zoneref *z;
 
@@ -2119,9 +2118,6 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	task_lock(current);
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
-	case MPOL_PREFERRED_MANY:
-		*mask = mempolicy->nodes;
-		break;
 	case MPOL_PREFERRED:
 		if (mempolicy->flags & MPOL_F_LOCAL)
 			nid = numa_node_id();
@@ -2132,6 +2128,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		*mask = mempolicy->nodes;
 		break;
 
@@ -2175,12 +2172,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 		 * Thus, it's possible for tsk to have allocated memory from
 		 * nodes in mask.
 		 */
-		break;
-	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
+	case MPOL_PREFERRED_MANY:
 		ret = nodes_intersects(mempolicy->nodes, *mask);
 		break;
 	default:
@@ -2404,7 +2400,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	switch (a->mode) {
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED_MANY:
 		return !!nodes_equal(a->nodes, b->nodes);
 	case MPOL_PREFERRED:
@@ -2558,6 +2553,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 			polnid = first_node(pol->nodes);
 		break;
 
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 		/* Optimize placement among multiple nodes via NUMA balancing */
 		if (pol->flags & MPOL_F_MORON) {
@@ -2580,8 +2576,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = zone_to_nid(z->zone);
 		break;
 
-		/* case MPOL_PREFERRED_MANY: */
-
 	default:
 		BUG();
 	}
@@ -3094,15 +3088,13 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 	switch (mode) {
 	case MPOL_DEFAULT:
 		break;
-	case MPOL_PREFERRED_MANY:
-		WARN_ON(flags & MPOL_F_LOCAL);
-		fallthrough;
 	case MPOL_PREFERRED:
 		if (flags & MPOL_F_LOCAL)
 			mode = MPOL_LOCAL;
 		else
 			nodes_or(nodes, nodes, pol->nodes);
 		break;
+	case MPOL_PREFERRED_MANY:
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
 		nodes = pol->nodes;
-- 
2.7.4