From: Peter Newman
Date: Thu, 2 Mar 2023 15:26:43 +0100
Subject: Re: [PATCH v3 3/3] x86/resctrl: Implement rename op for mon groups
To: Reinette Chatre
Cc: fenghua.yu@intel.com, Babu.Moger@amd.com, bp@alien8.de,
    dave.hansen@linux.intel.com, eranian@google.com, gupasani@google.com,
    hpa@zytor.com, james.morse@arm.com, linux-kernel@vger.kernel.org,
    mingo@redhat.com, skodak@google.com, tony.luck@intel.com,
    tglx@linutronix.de, x86@kernel.org

Hi Reinette,

On Sat, Feb 11, 2023 at 12:21 AM Reinette Chatre wrote:
> On 1/25/2023 2:13 AM, Peter Newman wrote:
> > --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> > +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> > @@ -3238,6 +3238,80 @@ static int rdtgroup_rmdir(struct kernfs_node *kn)
> >          return ret;
> >  }
> >
> > +static void mongrp_move(struct rdtgroup *rdtgrp, struct rdtgroup *new_prdtgrp,
> > +                        cpumask_var_t cpus)
>
> Could you please add some function comments on what is moved and how it
> is accomplished?

Sure, I think I should also make the name more descriptive. I think
"move" is too high-level here.

> > +        for_each_process_thread(p, t) {
> > +                if (is_closid_match(t, prdtgrp) && is_rmid_match(t, rdtgrp))
> > +                        rdt_move_one_task(t, new_prdtgrp->closid, t->rmid,
> > +                                          cpus);
> > +        }
> > +        read_unlock(&tasklist_lock);
>
> Can rdt_move_group_tasks() be used here?

As it is today, rdt_move_group_tasks() would move too many tasks:
mongrp_move() needs both the CLOSID and RMID to match, while
rdt_move_group_tasks() only needs at most one of the two to match.

I tried adding more parameters to rdt_move_group_tasks() to change the
move condition, but I couldn't make the resulting code not look gross,
and after factoring the tricky stuff into rdt_move_one_task(),
rdt_move_group_tasks() didn't look interesting enough to reuse.

> > +
> > +        update_closid_rmid(cpus, NULL);
> > +}
>
> I see the tasks in a monitor group handled. There is another way to
> create/manage a monitor group. For example, by not writing tasks to the
> tasks file but instead to write CPU ids to the CPU file. All tasks on a
> particular CPU will be monitored by that group. One rule is that a MON
> group may only have CPUs that are owned by the CTRL_MON group.
> It is not clear to me how such a group is handled in this work.

Right, I overlooked this form of monitoring.

I don't think it makes sense to move such a monitoring group, because a
CPU can only be assigned to one CTRL_MON group. Therefore a move of a
MON group with an assigned CPU will always violate the parent CTRL_MON
group rule after the move.

Based on this, I think a rename of a MON group should fail when
rdtgrp->cpu_mask is non-empty.

> > +
> > +static int rdtgroup_rename(struct kernfs_node *kn,
> > +                           struct kernfs_node *new_parent, const char *new_name)
> > +{
> > +        struct rdtgroup *new_prdtgrp;
> > +        struct rdtgroup *rdtgrp;
> > +        cpumask_var_t tmpmask;
> > +        int ret;
> > +
> > +        if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
> > +                return -ENOMEM;
> > +
> > +        rdtgrp = kernfs_to_rdtgroup(kn);
> > +        new_prdtgrp = kernfs_to_rdtgroup(new_parent);
> > +        if (!rdtgrp || !new_prdtgrp) {
> > +                free_cpumask_var(tmpmask);
> > +                return -EPERM;
> > +        }
> > +
>
> How robust is this against user space attempting to move files?

I'm not sure I understand the question. Can you be more specific?

> > +        /* Release both kernfs active_refs before obtaining rdtgroup mutex. */
> > +        rdtgroup_kn_get(rdtgrp, kn);
> > +        rdtgroup_kn_get(new_prdtgrp, new_parent);
> > +
> > +        mutex_lock(&rdtgroup_mutex);
> > +
> > +        if ((rdtgrp->flags & RDT_DELETED) || (new_prdtgrp->flags & RDT_DELETED)) {
> > +                ret = -ESRCH;
> > +                goto out;
> > +        }
> > +
> > +        /* Only a mon group can be moved to a new mon_groups directory. */
> > +        if (rdtgrp->type != RDTMON_GROUP ||
> > +            !is_mon_groups(new_parent, kn->name)) {
> > +                ret = -EPERM;
> > +                goto out;
> > +        }
> > +
>
> Should in-place moves be allowed?

I don't think it's useful in the context of the intended use case.
Also, it looks like kernfs_rename() would fail with EEXIST if I tried.
If it were important to someone, I could make it succeed before calling
into kernfs_rename().
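To make the CPU case above concrete, the extra validation I have in mind
would go in rdtgroup_rename() before the kernfs_rename() call quoted
below. This is only a rough, untested sketch and the error code is a
placeholder:

        /*
         * Untested sketch: a MON group that is monitoring assigned CPUs
         * cannot be moved without violating the parent CTRL_MON group's
         * CPU ownership rule, so refuse the rename outright.
         */
        if (!cpumask_empty(&rdtgrp->cpu_mask)) {
                ret = -EPERM;
                goto out;
        }

It would sit next to the existing RDTMON_GROUP/is_mon_groups() checks
and would need to run with rdtgroup_mutex held so the CPU assignment
can't change concurrently.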
> > +        ret = kernfs_rename(kn, new_parent, new_name);
> > +        if (ret)
> > +                goto out;
> > +
> > +        mongrp_move(rdtgrp, new_prdtgrp, tmpmask);
> > +
>
> Can tmpmask allocation/free be done in mongrp_move()?

Yes, but it looked like most other functions in this file allocate the
cpumask up front before validating parameters. If you have a preference
for internalizing its allocation within mongrp_move(), I can try it.

Thank you for your review.

-Peter