From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Michal Hocko
Cc: Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
    Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky,
    Andi Kleen, Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [RFC Patch v2 4/4] mm/mempolicy: kill MPOL_F_LOCAL bit
Date: Thu, 20 May 2021 16:30:04 +0800
Message-Id: <1621499404-67756-5-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1621499404-67756-1-git-send-email-feng.tang@intel.com>
References: <1621499404-67756-1-git-send-email-feng.tang@intel.com>

Now the only remaining case in which a real 'local' policy is faked by
a 'prefer' policy plus the MPOL_F_LOCAL bit is: a valid 'prefer' policy
with a valid 'preferred' node is rebound to a nodemask which doesn't
contain the 'preferred' node; allocations are then handled with the
'local' policy.

Add a new 'MPOL_F_LOCAL_TEMP' bit for this case, and kill the
MPOL_F_LOCAL bit, which simplifies the code considerably.

Reviewed-by: Andi Kleen
Signed-off-by: Feng Tang
---
 include/uapi/linux/mempolicy.h |  1 +
 mm/mempolicy.c                 | 77 +++++++++++++++++++++++-------------------
 2 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 4832fd0..942844a 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -63,6 +63,7 @@ enum {
 #define MPOL_F_LOCAL	(1 << 1)	/* preferred local allocation */
 #define MPOL_F_MOF	(1 << 3)	/* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4)	/* Migrate On protnone Reference On Node */
+#define MPOL_F_LOCAL_TEMP	(1 << 5) /* a policy temporarily changed from 'prefer' to 'local' */
 
 /*
  * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 833ed2d..53a480f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -332,6 +332,22 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 	pol->v.nodes = tmp;
 }
 
+static void mpol_rebind_local(struct mempolicy *pol,
+				const nodemask_t *nodes)
+{
+	if (unlikely(pol->flags & MPOL_F_STATIC_NODES)) {
+		int node = first_node(pol->w.user_nodemask);
+
+		BUG_ON(!(pol->flags & MPOL_F_LOCAL_TEMP));
+
+		if (node_isset(node, *nodes)) {
+			pol->v.preferred_node = node;
+			pol->mode = MPOL_PREFERRED;
+			pol->flags &= ~MPOL_F_LOCAL_TEMP;
+		}
+	}
+}
+
 static void mpol_rebind_preferred(struct mempolicy *pol,
 						const nodemask_t *nodes)
 {
@@ -342,13 +358,19 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
 
 		if (node_isset(node, *nodes)) {
 			pol->v.preferred_node = node;
-			pol->flags &= ~MPOL_F_LOCAL;
-		} else
-			pol->flags |= MPOL_F_LOCAL;
+		} else {
+			/*
+			 * If there is no valid node, change the mode to
+			 * MPOL_LOCAL, which will be restored when the
+			 * next rebind() sees a valid node.
+			 */
+			pol->mode = MPOL_LOCAL;
+			pol->flags |= MPOL_F_LOCAL_TEMP;
+		}
 	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
 		pol->v.preferred_node = first_node(tmp);
-	} else if (!(pol->flags & MPOL_F_LOCAL)) {
+	} else {
 		pol->v.preferred_node = node_remap(pol->v.preferred_node,
 						   pol->w.cpuset_mems_allowed,
 						   *nodes);
@@ -367,7 +389,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
 {
 	if (!pol)
 		return;
-	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+	if (!mpol_store_user_nodemask(pol) &&
 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
 		return;
 
@@ -419,7 +441,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 		.rebind = mpol_rebind_nodemask,
 	},
 	[MPOL_LOCAL] = {
-		.rebind = mpol_rebind_default,
+		.rebind = mpol_rebind_local,
 	},
 };
 
@@ -913,10 +935,12 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	case MPOL_INTERLEAVE:
 		*nodes = p->v.nodes;
 		break;
+	case MPOL_LOCAL:
+		/* return empty node mask for local allocation */
+		break;
+
 	case MPOL_PREFERRED:
-		if (!(p->flags & MPOL_F_LOCAL))
-			node_set(p->v.preferred_node, *nodes);
-		/* else return empty node mask for local allocation */
+		node_set(p->v.preferred_node, *nodes);
 		break;
 	default:
 		BUG();
@@ -1880,9 +1904,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 /* Return the node id preferred by the given mempolicy, or the given id */
 static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 {
-	if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL))
+	if (policy->mode == MPOL_PREFERRED) {
 		nd = policy->v.preferred_node;
-	else {
+	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
 		 * because we might easily break the expectation to stay on the
@@ -1919,14 +1943,11 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	policy = current->mempolicy;
-	if (!policy || policy->flags & MPOL_F_LOCAL)
+	if (!policy)
 		return node;
 
 	switch (policy->mode) {
 	case MPOL_PREFERRED:
-		/*
-		 * handled MPOL_F_LOCAL above
-		 */
 		return policy->v.preferred_node;
 
 	case MPOL_INTERLEAVE:
@@ -2060,16 +2081,13 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
 	case MPOL_PREFERRED:
-		if (mempolicy->flags & MPOL_F_LOCAL)
-			nid = numa_node_id();
-		else
-			nid = mempolicy->v.preferred_node;
+		nid = mempolicy->v.preferred_node;
 		init_nodemask_of_node(mask, nid);
 		break;
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*mask =  mempolicy->v.nodes;
+		*mask = mempolicy->v.nodes;
 		break;
 
 	case MPOL_LOCAL:
@@ -2181,7 +2199,7 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * If the policy is interleave, or does not allow the current
 		 * node in its nodemask, we allocate the standard way.
		 */
-		if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
+		if (pol->mode == MPOL_PREFERRED)
 			hpage_node = pol->v.preferred_node;
 
 		nmask = policy_nodemask(gfp, pol);
@@ -2317,9 +2335,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	case MPOL_INTERLEAVE:
 		return !!nodes_equal(a->v.nodes, b->v.nodes);
 	case MPOL_PREFERRED:
-		/* a's ->flags is the same as b's */
-		if (a->flags & MPOL_F_LOCAL)
-			return true;
 		return a->v.preferred_node == b->v.preferred_node;
 	case MPOL_LOCAL:
 		return true;
@@ -2460,10 +2475,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		break;
 
 	case MPOL_PREFERRED:
-		if (pol->flags & MPOL_F_LOCAL)
-			polnid = numa_node_id();
-		else
-			polnid = pol->v.preferred_node;
+		polnid = pol->v.preferred_node;
 		break;
 
 	case MPOL_LOCAL:
@@ -2834,9 +2846,6 @@ void numa_default_policy(void)
  * Parse and format mempolicy from/to strings
  */
 
-/*
- * "local" is implemented internally by MPOL_PREFERRED with MPOL_F_LOCAL flag.
- */
 static const char * const policy_modes[] =
 {
 	[MPOL_DEFAULT] = "default",
@@ -3003,12 +3012,10 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 
 	switch (mode) {
 	case MPOL_DEFAULT:
+	case MPOL_LOCAL:
 		break;
 	case MPOL_PREFERRED:
-		if (flags & MPOL_F_LOCAL)
-			mode = MPOL_LOCAL;
-		else
-			node_set(pol->v.preferred_node, nodes);
+		node_set(pol->v.preferred_node, nodes);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-- 
2.7.4
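
For reference, below is a minimal userspace sketch of the rebind scenario
described in the changelog. It is illustrative only and not part of the
patch: it assumes libnuma's <numaif.h> declaration of set_mempolicy(2)
and a machine on which NUMA node 1 exists. The cpuset mems update that
triggers the kernel-side rebind would be performed separately (e.g. by
an administrator writing the task's cpuset.mems) and is not shown.

/*
 * Illustrative only: the task installs a 'prefer' policy for node 1.
 * If its cpuset's mems is later changed to exclude node 1,
 * mpol_rebind_preferred() switches the policy to MPOL_LOCAL (marked
 * with MPOL_F_LOCAL_TEMP); a later rebind that re-allows node 1
 * restores MPOL_PREFERRED.  Build with: gcc sketch.c -lnuma
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long nodemask = 1UL << 1;	/* bit 1 => NUMA node 1 */

	/* Ask the kernel to prefer node 1 for this task's allocations. */
	if (set_mempolicy(MPOL_PREFERRED, &nodemask, sizeof(nodemask) * 8))
		perror("set_mempolicy");

	/* Allocations now prefer node 1 until a cpuset rebind removes it. */
	void *buf = malloc(1 << 20);

	free(buf);
	return 0;
}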