From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de,
	tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
	tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
	corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
	honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
	jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
	emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com
Subject: [PATCH v3 04/11] mm/mempolicy: create struct mempolicy_args for creating new mempolicies
Date: Wed, 13 Dec 2023 17:41:11 -0500
Message-Id: <20231213224118.1949-5-gregory.price@memverge.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231213224118.1949-1-gregory.price@memverge.com>
References: <20231213224118.1949-1-gregory.price@memverge.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch adds a new kernel structure, `struct mempolicy_args`, intended
to be used for an extensible get/set_mempolicy interface. It implements
the fields required to support the existing syscall interfaces, but does
not expose any user-facing argument structure.

mpol_new() is refactored to take the argument structure so that future
mempolicy extensions can all be managed in the mempolicy constructor.
The get_mempolicy and mbind syscalls are refactored to use the new
argument structure, as are all callers of mpol_new() and
do_set_mempolicy().

Signed-off-by: Gregory Price
---
 include/linux/mempolicy.h | 12 +++++++
 mm/mempolicy.c            | 69 +++++++++++++++++++++++++++++----------
 2 files changed, 63 insertions(+), 18 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index ba09167e80f7..aeac19dfc2b6 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -61,6 +61,18 @@ struct mempolicy {
 	} wil;
 };
 
+/*
+ * Describes settings of a mempolicy during set/get syscalls and
+ * kernel internal calls to do_set_mempolicy()
+ */
+struct mempolicy_args {
+	unsigned short mode;		/* policy mode */
+	unsigned short mode_flags;	/* policy mode flags */
+	int home_node;			/* mbind: use MPOL_MF_HOME_NODE */
+	nodemask_t *policy_nodes;	/* get/set/mbind */
+	int policy_node;		/* get: policy node information */
+};
+
 /*
  * Support for managing mempolicy data objects (clone, copy, destroy)
  * The default fast path of a NULL MPOL_DEFAULT policy is always inlined.
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e55a8fa13e45..a2353b208af7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -265,10 +265,12 @@ static int mpol_set_nodemask(struct mempolicy *pol,
  * This function just creates a new policy, does some check and simple
  * initialization. You must invoke mpol_set_nodemask() to set nodes.
  */
-static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
-				  nodemask_t *nodes)
+static struct mempolicy *mpol_new(struct mempolicy_args *args)
 {
 	struct mempolicy *policy;
+	unsigned short mode = args->mode;
+	unsigned short flags = args->mode_flags;
+	nodemask_t *nodes = args->policy_nodes;
 
 	if (mode == MPOL_DEFAULT) {
 		if (nodes && !nodes_empty(*nodes))
@@ -817,8 +819,7 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
 }
 
 /* Set the process memory policy */
-static long do_set_mempolicy(unsigned short mode, unsigned short flags,
-			     nodemask_t *nodes)
+static long do_set_mempolicy(struct mempolicy_args *args)
 {
 	struct mempolicy *new, *old;
 	NODEMASK_SCRATCH(scratch);
@@ -827,14 +828,14 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
 	if (!scratch)
 		return -ENOMEM;
 
-	new = mpol_new(mode, flags, nodes);
+	new = mpol_new(args);
 	if (IS_ERR(new)) {
 		ret = PTR_ERR(new);
 		goto out;
 	}
 
 	task_lock(current);
-	ret = mpol_set_nodemask(new, nodes, scratch);
+	ret = mpol_set_nodemask(new, args->policy_nodes, scratch);
 	if (ret) {
 		task_unlock(current);
 		mpol_put(new);
@@ -1232,8 +1233,7 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 #endif
 
 static long do_mbind(unsigned long start, unsigned long len,
-		     unsigned short mode, unsigned short mode_flags,
-		     nodemask_t *nmask, unsigned long flags)
+		     struct mempolicy_args *margs, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
@@ -1253,7 +1253,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	if (start & ~PAGE_MASK)
 		return -EINVAL;
 
-	if (mode == MPOL_DEFAULT)
+	if (margs->mode == MPOL_DEFAULT)
 		flags &= ~MPOL_MF_STRICT;
 
 	len = PAGE_ALIGN(len);
@@ -1264,7 +1264,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	if (end == start)
 		return 0;
 
-	new = mpol_new(mode, mode_flags, nmask);
+	new = mpol_new(margs);
 	if (IS_ERR(new))
 		return PTR_ERR(new);
 
@@ -1281,7 +1281,8 @@ static long do_mbind(unsigned long start, unsigned long len,
 		NODEMASK_SCRATCH(scratch);
 		if (scratch) {
 			mmap_write_lock(mm);
-			err = mpol_set_nodemask(new, nmask, scratch);
+			err = mpol_set_nodemask(new, margs->policy_nodes,
+						scratch);
 			if (err)
 				mmap_write_unlock(mm);
 		} else
@@ -1295,7 +1296,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	 * Lock the VMAs before scanning for pages to migrate,
 	 * to ensure we don't miss a concurrently inserted page.
 	 */
-	nr_failed = queue_pages_range(mm, start, end, nmask,
+	nr_failed = queue_pages_range(mm, start, end, margs->policy_nodes,
 			flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist);
 
 	if (nr_failed < 0) {
@@ -1500,6 +1501,7 @@ static long kernel_mbind(unsigned long start, unsigned long len,
 			 unsigned long mode, const unsigned long __user *nmask,
 			 unsigned long maxnode, unsigned int flags)
 {
+	struct mempolicy_args margs;
 	unsigned short mode_flags;
 	nodemask_t nodes;
 	int lmode = mode;
@@ -1514,7 +1516,12 @@ static long kernel_mbind(unsigned long start, unsigned long len,
 	if (err)
 		return err;
 
-	return do_mbind(start, len, lmode, mode_flags, &nodes, flags);
+	memset(&margs, 0, sizeof(margs));
+	margs.mode = lmode;
+	margs.mode_flags = mode_flags;
+	margs.policy_nodes = &nodes;
+
+	return do_mbind(start, len, &margs, flags);
 }
 
 SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, len,
@@ -1595,6 +1602,7 @@ SYSCALL_DEFINE6(mbind, unsigned long, start, unsigned long, len,
 static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask,
 				 unsigned long maxnode)
 {
+	struct mempolicy_args args;
 	unsigned short mode_flags;
 	nodemask_t nodes;
 	int lmode = mode;
@@ -1608,7 +1616,12 @@ static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask,
 	if (err)
 		return err;
 
-	return do_set_mempolicy(lmode, mode_flags, &nodes);
+	memset(&args, 0, sizeof(args));
+	args.mode = lmode;
+	args.mode_flags = mode_flags;
+	args.policy_nodes = &nodes;
+
+	return do_set_mempolicy(&args);
 }
 
 SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long __user *, nmask,
@@ -2890,6 +2903,7 @@ static int shared_policy_replace(struct shared_policy *sp, pgoff_t start,
 void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 {
 	int ret;
+	struct mempolicy_args margs;
 
 	sp->root = RB_ROOT;		/* empty tree == default mempolicy */
 	rwlock_init(&sp->lock);
@@ -2902,8 +2916,12 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 		if (!scratch)
 			goto put_mpol;
 
+		memset(&margs, 0, sizeof(margs));
+		margs.mode = mpol->mode;
+		margs.mode_flags = mpol->flags;
+		margs.policy_nodes = &mpol->w.user_nodemask;
 		/* contextualize the tmpfs mount point mempolicy to this file */
-		npol = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask);
+		npol = mpol_new(&margs);
 		if (IS_ERR(npol))
 			goto free_scratch; /* no valid nodemask intersection */
 
@@ -3011,6 +3029,7 @@ static inline void __init check_numabalancing_enable(void)
 
 void __init numa_policy_init(void)
 {
+	struct mempolicy_args args;
 	nodemask_t interleave_nodes;
 	unsigned long largest = 0;
 	int nid, prefer = 0;
@@ -3056,7 +3075,11 @@ void __init numa_policy_init(void)
 	if (unlikely(nodes_empty(interleave_nodes)))
 		node_set(prefer, interleave_nodes);
 
-	if (do_set_mempolicy(MPOL_INTERLEAVE, 0, &interleave_nodes))
+	memset(&args, 0, sizeof(args));
+	args.mode = MPOL_INTERLEAVE;
+	args.policy_nodes = &interleave_nodes;
+
+	if (do_set_mempolicy(&args))
 		pr_err("%s: interleaving failed\n", __func__);
 
 	check_numabalancing_enable();
@@ -3065,7 +3088,12 @@ void __init numa_policy_init(void)
 /* Reset policy of current process to default */
 void numa_default_policy(void)
 {
-	do_set_mempolicy(MPOL_DEFAULT, 0, NULL);
+	struct mempolicy_args args;
+
+	memset(&args, 0, sizeof(args));
+	args.mode = MPOL_DEFAULT;
+
+	do_set_mempolicy(&args);
 }
 
 /*
@@ -3095,6 +3123,7 @@ static const char * const policy_modes[] =
  */
 int mpol_parse_str(char *str, struct mempolicy **mpol)
 {
+	struct mempolicy_args margs;
 	struct mempolicy *new = NULL;
 	unsigned short mode_flags;
 	nodemask_t nodes;
@@ -3181,7 +3210,11 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 		goto out;
 	}
 
-	new = mpol_new(mode, mode_flags, &nodes);
+	memset(&margs, 0, sizeof(margs));
+	margs.mode = mode;
+	margs.mode_flags = mode_flags;
+	margs.policy_nodes = &nodes;
+	new = mpol_new(&margs);
 	if (IS_ERR(new))
 		goto out;
-- 
2.39.1