From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com,
    vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, dave.hansen@intel.com,
    mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
    linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>
Subject: [PATCH V4 07/38] x86/intel_rdt: Initialize new resource group with sane defaults
Date: Tue, 22 May 2018 04:28:55 -0700
Message-Id: <68030110df15615d95731fd354ff7adfb0476d37.1526987654.git.reinette.chatre@intel.com>
Currently, when a new resource group is created, its allocations are
those that belonged to the resource group to which its closid was
previously assigned. That is, a case like the following can be
encountered:

mkdir newgroup
cat newgroup/schemata
L2:0=ff;1=ff
echo 'L2:0=0xf0;1=0xf0' > newgroup/schemata
cat newgroup/schemata
L2:0=0xf0;1=0xf0
rmdir newgroup
mkdir newnewgroup
cat newnewgroup/schemata
L2:0=0xf0;1=0xf0

When a new group is created it is reasonable to expect its allocations
to be initialized with all regions that it can possibly use. At this
time these regions are all that are shareable by other resource groups
as well as regions that are not currently used. The hardware is
initialized with these new default allocations when the new resource
group is created.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
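As a note for reviewers, here is a stand-alone sketch of the per-domain
bitmask arithmetic that rdtgroup_init_alloc() below performs. All values,
array sizes and flag arrays here are hypothetical mock-ups of the kernel
state (ctrl_val, shareable_bits, per-closid modes); none of this is
kernel API:

#include <stdint.h>
#include <stdio.h>

#define CBM_LEN    8  /* hypothetical: 8-bit capacity bitmask */
#define NUM_CLOSID 4  /* hypothetical: 4 CLOSIDs on this resource */

int main(void)
{
	/* Mock state: per-CLOSID CBMs plus allocation/mode flags. */
	uint32_t ctrl_val[NUM_CLOSID]  = { 0x3f, 0xc0, 0x00, 0x00 };
	int      allocated[NUM_CLOSID] = { 1, 1, 0, 0 };
	int      shareable[NUM_CLOSID] = { 1, 0, 0, 0 };
	uint32_t hw_shareable = 0x03;   /* r->cache.shareable_bits */
	uint32_t new_closid = 2;        /* the group being created */

	uint32_t full_mask = (1u << CBM_LEN) - 1;
	uint32_t used_b = hw_shareable; /* every bit in use by anyone */
	uint32_t new_ctrl = hw_shareable;

	for (uint32_t i = 0; i < NUM_CLOSID; i++) {
		if (allocated[i] && i != new_closid) {
			used_b |= ctrl_val[i];
			if (shareable[i])        /* RDT_MODE_SHAREABLE */
				new_ctrl |= ctrl_val[i];
		}
	}

	/* Bits used by nobody are also available to the new group. */
	uint32_t unused_b = (used_b ^ full_mask) & full_mask;

	new_ctrl |= unused_b;
	if (new_ctrl == 0)               /* all-zero CBM is invalid */
		printf("no space\n");
	else
		printf("new CBM: %#x\n", new_ctrl); /* prints 0x3f */
	return 0;
}

Note how the bits owned by the non-shareable group (closid 1's 0xc0) are
recorded as used, so they neither join the new CBM nor count as unused:
the new group starts with only the shareable bits, 0x3f.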
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 69 ++++++++++++++++++++++++++++++--
 1 file changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 35e538eed977..b2008c697ce0 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -133,7 +133,7 @@ void closid_free(int closid)
  * Return: true if @closid is currently associated with a resource group,
  * false if @closid is free
  */
-static bool __attribute__ ((unused)) closid_allocated(unsigned int closid)
+static bool closid_allocated(unsigned int closid)
 {
 	return (closid_free_map & (1 << closid)) == 0;
 }
@@ -1799,6 +1799,64 @@ static int mkdir_mondata_all(struct kernfs_node *parent_kn,
 	return ret;
 }
 
+/**
+ * rdtgroup_init_alloc - Initialize the new RDT group's allocations
+ *
+ * A new RDT group is being created on an allocation capable (CAT)
+ * supporting system. Set this group up to start off with all usable
+ * allocations. That is, all shareable and unused bits.
+ *
+ * All-zero CBM is invalid. If there are no more shareable bits available
+ * on any domain then the entire allocation will fail.
+ */
+static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+{
+	u32 used_b = 0, unused_b = 0;
+	u32 closid = rdtgrp->closid;
+	struct rdt_resource *r;
+	enum rdtgrp_mode mode;
+	struct rdt_domain *d;
+	int i, ret;
+	u32 *ctrl;
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		list_for_each_entry(d, &r->domains, list) {
+			d->have_new_ctrl = false;
+			d->new_ctrl = r->cache.shareable_bits;
+			used_b = r->cache.shareable_bits;
+			ctrl = d->ctrl_val;
+			for (i = 0; i < r->num_closid; i++, ctrl++) {
+				if (closid_allocated(i) && i != closid) {
+					mode = rdtgroup_mode_by_closid(i);
+					used_b |= *ctrl;
+					if (mode == RDT_MODE_SHAREABLE)
+						d->new_ctrl |= *ctrl;
+				}
+			}
+			unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
+			unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
+			d->new_ctrl |= unused_b;
+			if (d->new_ctrl == 0) {
+				rdt_last_cmd_printf("no space on %s:%d\n",
+						    r->name, d->id);
+				return -ENOSPC;
+			}
+			d->have_new_ctrl = true;
+		}
+	}
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		ret = update_domains(r, rdtgrp->closid);
+		if (ret < 0) {
+			rdt_last_cmd_puts("failed to initialize allocations\n");
+			return ret;
+		}
+		rdtgrp->mode = RDT_MODE_SHAREABLE;
+	}
+
+	return 0;
+}
+
 static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 			     struct kernfs_node *prgrp_kn,
 			     const char *name, umode_t mode,
@@ -1957,6 +2015,10 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 	ret = 0;
 
 	rdtgrp->closid = closid;
+	ret = rdtgroup_init_alloc(rdtgrp);
+	if (ret < 0)
+		goto out_id_free;
+
 	list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
 
 	if (rdt_mon_capable) {
@@ -1967,15 +2029,16 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 		ret = mongroup_create_dir(kn, NULL, "mon_groups", NULL);
 		if (ret) {
 			rdt_last_cmd_puts("kernfs subdir error\n");
-			goto out_id_free;
+			goto out_del_list;
 		}
 	}
 
 	goto out_unlock;
 
+out_del_list:
+	list_del(&rdtgrp->rdtgroup_list);
 out_id_free:
 	closid_free(closid);
-	list_del(&rdtgrp->rdtgroup_list);
 out_common_fail:
 	mkdir_rdt_prepare_clean(rdtgrp);
 out_unlock:
-- 
2.13.6
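A side note on the error-path change at the tail of the patch: the new
out_del_list label keeps the unwind in reverse order of setup, so the
group is removed from rdt_all_groups before its closid is freed. A
minimal stand-alone sketch of that goto-ladder pattern, with purely
hypothetical step/undo helpers (not kernel code):

#include <stdio.h>

static int step_a(void)       { printf("alloc closid\n");  return 0; }
static int step_b(void)       { printf("add to list\n");   return 0; }
static int step_c_fails(void) { printf("create subdir\n"); return -1; }
static void undo_b(void)      { printf("del from list\n"); }
static void undo_a(void)      { printf("free closid\n");   }

int main(void)
{
	if (step_a())
		return 1;
	if (step_b())
		goto out_a;
	if (step_c_fails())
		goto out_b;	/* was effectively "goto out_a" before the fix */
	return 0;

out_b:
	undo_b();	/* undo in reverse: leave the list first ... */
out_a:
	undo_a();	/* ... then release the identifier */
	return 1;
}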