From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com, vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, dave.hansen@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>
Subject: [PATCH V3 07/39] x86/intel_rdt: Initialize new resource group with sane defaults
Date: Wed, 25 Apr 2018 03:09:43 -0700
Message-Id: <4c08b710419f4d381442911dd6f836f6452cd9f9.1524649902.git.reinette.chatre@intel.com>
Currently, when a new resource group is created, its allocations are those that belonged to the resource group to which its closid previously belonged. That is, we can encounter a case like this:

    mkdir newgroup
    cat newgroup/schemata
    L2:0=ff;1=ff
    echo 'L2:0=0xf0;1=0xf0' > newgroup/schemata
    cat newgroup/schemata
    L2:0=0xf0;1=0xf0
    rmdir newgroup
    mkdir newnewgroup
    cat newnewgroup/schemata
    L2:0=0xf0;1=0xf0

When a new group is created it is reasonable to expect its allocations to be initialized with all regions that it can possibly use. At this time these regions are all those that are shareable by other resource groups as well as regions that are not currently in use. When a new resource group is created, the hardware is initialized with these default allocations.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 69 ++++++++++++++++++++++++++++++--
 1 file changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index bc392ff597c3..d66283f83ece 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -133,7 +133,7 @@ void closid_free(int closid)
  * Return: true if @closid is currently associated with a resource group,
  * false if @closid is free
  */
-static bool __attribute__ ((unused)) closid_allocated(unsigned int closid)
+static bool closid_allocated(unsigned int closid)
 {
 	return (closid_free_map & (1 << closid)) == 0;
 }
@@ -1766,6 +1766,64 @@ static int mkdir_mondata_all(struct kernfs_node *parent_kn,
 	return ret;
 }
 
+/**
+ * rdtgroup_init_alloc - Initialize the new RDT group's allocations
+ *
+ * A new RDT group is being created on an allocation capable (CAT)
+ * supporting system. Set this group up to start off with all usable
+ * allocations. That is, all shareable and unused bits.
+ *
+ * All-zero CBM is invalid. If there are no more shareable bits available
+ * on any domain then the entire allocation will fail.
+ */
+static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+{
+	u32 used_b = 0, unused_b = 0;
+	u32 closid = rdtgrp->closid;
+	struct rdt_resource *r;
+	enum rdtgrp_mode mode;
+	struct rdt_domain *d;
+	int i, ret;
+	u32 *ctrl;
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		list_for_each_entry(d, &r->domains, list) {
+			d->have_new_ctrl = false;
+			d->new_ctrl = r->cache.shareable_bits;
+			used_b = r->cache.shareable_bits;
+			ctrl = d->ctrl_val;
+			for (i = 0; i < r->num_closid; i++, ctrl++) {
+				if (closid_allocated(i) && i != closid) {
+					mode = rdtgroup_mode_by_closid(i);
+					used_b |= *ctrl;
+					if (mode == RDT_MODE_SHAREABLE)
+						d->new_ctrl |= *ctrl;
+				}
+			}
+			unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
+			unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
+			d->new_ctrl |= unused_b;
+			if (d->new_ctrl == 0) {
+				rdt_last_cmd_printf("no space on %s:%d\n",
+						    r->name, d->id);
+				return -ENOSPC;
+			}
+			d->have_new_ctrl = true;
+		}
+	}
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		ret = update_domains(r, rdtgrp->closid);
+		if (ret < 0) {
+			rdt_last_cmd_puts("failed to initialize allocations\n");
+			return ret;
+		}
+		rdtgrp->mode = RDT_MODE_SHAREABLE;
+	}
+
+	return 0;
+}
+
 static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 			     struct kernfs_node *prgrp_kn,
 			     const char *name, umode_t mode,
@@ -1923,6 +1981,10 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 	closid = ret;
 
 	rdtgrp->closid = closid;
+	ret = rdtgroup_init_alloc(rdtgrp);
+	if (ret < 0)
+		goto out_id_free;
+
 	list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
 
 	if (rdt_mon_capable) {
@@ -1933,15 +1995,16 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 		ret = mongroup_create_dir(kn, NULL, "mon_groups", NULL);
 		if (ret) {
 			rdt_last_cmd_puts("kernfs subdir error\n");
-			goto out_id_free;
+			goto out_del_list;
 		}
 	}
 
 	goto out_unlock;
 
+out_del_list:
+	list_del(&rdtgrp->rdtgroup_list);
 out_id_free:
 	closid_free(closid);
-	list_del(&rdtgrp->rdtgroup_list);
 out_common_fail:
 	mkdir_rdt_prepare_clean(rdtgrp);
 out_unlock:
-- 
2.13.6
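
[Editor's note: for readers less familiar with the CBM arithmetic in rdtgroup_init_alloc(), below is a minimal userspace sketch of the same bit manipulation for a single cache domain. The closid table, the shareable_bits value, the 8-bit cbm_len and the new group's closid are made-up example values, not taken from the patch; only the masking logic mirrors the kernel code above, under the assumption that all closids in the table are currently allocated.]

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical example: one L2 cache domain with an 8-bit capacity bitmask. */
    #define CBM_LEN    8
    #define FULL_MASK  ((1u << CBM_LEN) - 1)    /* analogous to BIT_MASK(cbm_len) - 1 */

    int main(void)
    {
            /* Per-closid CBMs currently programmed in this domain (made-up values).
             * closid 3 is the group being created; its old value 0x30 is stale and
             * must be ignored, which is exactly the bug the patch addresses. */
            uint32_t ctrl_val[]  = { 0xe0, 0x0c, 0x03, 0x30 };
            int      shareable[] = { 1,    0,    1,    0    };  /* 1 = RDT_MODE_SHAREABLE */
            uint32_t shareable_bits = 0x01;      /* bits always shareable (e.g. with I/O) */
            uint32_t new_closid = 3;             /* closid of the group being created */

            uint32_t new_ctrl = shareable_bits;  /* start from the always-shareable bits */
            uint32_t used_b   = shareable_bits;
            uint32_t unused_b;

            for (uint32_t i = 0; i < sizeof(ctrl_val) / sizeof(ctrl_val[0]); i++) {
                    if (i == new_closid)
                            continue;            /* skip the group being initialized */
                    used_b |= ctrl_val[i];       /* every other closid consumes bits */
                    if (shareable[i])
                            new_ctrl |= ctrl_val[i]; /* shareable groups' bits may be reused */
            }

            unused_b  = ~used_b & FULL_MASK;     /* bits no group is using at all */
            new_ctrl |= unused_b;

            if (new_ctrl == 0)
                    printf("no space in this domain (-ENOSPC)\n");
            else
                    printf("new group would start with CBM 0x%x\n", new_ctrl);
            return 0;
    }

With these example values the new group starts with 0xf3: the always-shareable bit 0x01, the bits of the shareable closids 0 and 2 (0xe0 | 0x03), and the unused bit 0x10, while closid 1's exclusive-looking 0x0c and the stale 0x30 of closid 3 are excluded.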