From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com,
    vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, dave.hansen@intel.com,
    mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
    linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>
Subject: [PATCH V6 07/38] x86/intel_rdt: Initialize new resource group with sane defaults
Date: Thu, 7 Jun 2018 14:24:25 -0700
Message-Id: <44af4ecef879e88ec1b74c5decbf5dccaf998866.1528405422.git.reinette.chatre@intel.com>
In-Reply-To: <68030110df15615d95731fd354ff7adfb0476d37.1527593970.git.reinette.chatre@intel.com>
References: <68030110df15615d95731fd354ff7adfb0476d37.1527593970.git.reinette.chatre@intel.com>

Currently, when a new resource group is created, its allocations are
those left behind by the resource group that previously owned its
closid. That is, we can encounter a case like:

  mkdir newgroup
  cat newgroup/schemata
  L2:0=ff;1=ff
  echo 'L2:0=0xf0;1=0xf0' > newgroup/schemata
  cat newgroup/schemata
  L2:0=0xf0;1=0xf0
  rmdir newgroup
  mkdir newnewgroup
  cat newnewgroup/schemata
  L2:0=0xf0;1=0xf0

When a new group is created it is reasonable to expect its allocations
to be initialized with all regions that it can possibly use. At this
time these regions are all cache portions that are shareable with other
resource groups as well as portions that are currently unused. If the
available cache region is found to be non-contiguous it is adjusted to
enforce validity, since a capacity bitmask (CBM) must consist of
contiguous set bits. When a new resource group is created the hardware
is programmed with these new default allocations.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
V6: The cache region that is available for use by a new resource group
may not be contiguous. Enforce validity by selecting only the first
contiguous portion of the available region. The goal is to ensure a
sane and valid default on resource group creation; the user can still
modify this default if it does not meet requirements.
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 107 ++++++++++++++++++++++++++++++-
 1 file changed, 104 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 35e538eed977..7ae798a8ebf6 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -133,7 +133,7 @@ void closid_free(int closid)
  * Return: true if @closid is currently associated with a resource group,
  * false if @closid is free
  */
-static bool __attribute__ ((unused)) closid_allocated(unsigned int closid)
+static bool closid_allocated(unsigned int closid)
 {
	return (closid_free_map & (1 << closid)) == 0;
 }
@@ -1799,6 +1799,102 @@ static int mkdir_mondata_all(struct kernfs_node *parent_kn,
	return ret;
 }

+/**
+ * cbm_ensure_valid - Enforce validity on provided CBM
+ * @_val:	Candidate CBM
+ * @r:		RDT resource to which the CBM belongs
+ *
+ * The provided CBM represents all cache portions available for use. This
+ * may be represented by a bitmap that does not consist of contiguous ones
+ * and thus be an invalid CBM.
+ * Here the provided CBM is forced to be a valid CBM by only considering
+ * the first set of contiguous bits as valid and clearing all bits after
+ * this.
+ * The intention here is to provide a valid default CBM with which a new
+ * resource group is initialized. The user can follow this with a
+ * modification to the CBM if the default does not satisfy the
+ * requirements.
+ */
+static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r)
+{
+	unsigned long *val = (unsigned long *)_val;
+	unsigned int cbm_len = r->cache.cbm_len;
+	unsigned long first_bit, zero_bit;
+
+	if (*val == 0)
+		return;
+
+	first_bit = find_first_bit(val, cbm_len);
+	zero_bit = find_next_zero_bit(val, cbm_len, first_bit);
+
+	/* Clear any remaining bits to ensure contiguous region */
+	bitmap_clear(val, zero_bit, cbm_len - zero_bit);
+}
+
+/**
+ * rdtgroup_init_alloc - Initialize the new RDT group's allocations
+ *
+ * A new RDT group is being created on an allocation capable (CAT)
+ * supporting system. Set this group up to start off with all usable
+ * allocations. That is, all shareable and unused bits.
+ *
+ * All-zero CBM is invalid. If there are no more shareable bits available
+ * on any domain then the entire allocation will fail.
+ */
+static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+{
+	u32 used_b = 0, unused_b = 0;
+	u32 closid = rdtgrp->closid;
+	struct rdt_resource *r;
+	enum rdtgrp_mode mode;
+	struct rdt_domain *d;
+	int i, ret;
+	u32 *ctrl;
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		list_for_each_entry(d, &r->domains, list) {
+			d->have_new_ctrl = false;
+			d->new_ctrl = r->cache.shareable_bits;
+			used_b = r->cache.shareable_bits;
+			ctrl = d->ctrl_val;
+			for (i = 0; i < r->num_closid; i++, ctrl++) {
+				if (closid_allocated(i) && i != closid) {
+					mode = rdtgroup_mode_by_closid(i);
+					used_b |= *ctrl;
+					if (mode == RDT_MODE_SHAREABLE)
+						d->new_ctrl |= *ctrl;
+				}
+			}
+			unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
+			unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
+			d->new_ctrl |= unused_b;
+			/*
+			 * Force the initial CBM to be valid, user can
+			 * modify the CBM based on system availability.
+			 */
+			cbm_ensure_valid(&d->new_ctrl, r);
+			if (bitmap_weight((unsigned long *) &d->new_ctrl,
+					  r->cache.cbm_len) <
+			    r->cache.min_cbm_bits) {
+				rdt_last_cmd_printf("no space on %s:%d\n",
+						    r->name, d->id);
+				return -ENOSPC;
+			}
+			d->have_new_ctrl = true;
+		}
+	}
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		ret = update_domains(r, rdtgrp->closid);
+		if (ret < 0) {
+			rdt_last_cmd_puts("failed to initialize allocations\n");
+			return ret;
+		}
+		rdtgrp->mode = RDT_MODE_SHAREABLE;
+	}
+
+	return 0;
+}
+
 static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
			     struct kernfs_node *prgrp_kn,
			     const char *name, umode_t mode,
@@ -1957,6 +2053,10 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
	ret = 0;
	rdtgrp->closid = closid;

+	ret = rdtgroup_init_alloc(rdtgrp);
+	if (ret < 0)
+		goto out_id_free;
+
	list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);

	if (rdt_mon_capable) {
@@ -1967,15 +2067,16 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
		ret = mongroup_create_dir(kn, NULL, "mon_groups", NULL);
		if (ret) {
			rdt_last_cmd_puts("kernfs subdir error\n");
-			goto out_id_free;
+			goto out_del_list;
		}
	}

	goto out_unlock;

+out_del_list:
+	list_del(&rdtgrp->rdtgroup_list);
 out_id_free:
	closid_free(closid);
-	list_del(&rdtgrp->rdtgroup_list);
 out_common_fail:
	mkdir_rdt_prepare_clean(rdtgrp);
 out_unlock:
-- 
2.13.6