From: Reinette Chatre
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com, vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, dave.hansen@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, Reinette Chatre
Subject: [PATCH V5 07/38] x86/intel_rdt: Initialize new resource group with sane defaults
Date: Tue, 29 May 2018 05:57:32 -0700
Message-Id: <68030110df15615d95731fd354ff7adfb0476d37.1527593970.git.reinette.chatre@intel.com>

Currently when a new resource group is created its allocations would be
those that belonged to the resource group to which its closid belonged
previously.

That is, we can encounter a case like:

mkdir newgroup
cat newgroup/schemata
L2:0=ff;1=ff
echo 'L2:0=0xf0;1=0xf0' > newgroup/schemata
cat newgroup/schemata
L2:0=0xf0;1=0xf0
rmdir newgroup
mkdir newnewgroup
cat newnewgroup/schemata
L2:0=0xf0;1=0xf0

When the new group is created it would be reasonable to expect its
allocations to be initialized with all regions that it can possibly
use. At this time these regions would be all that are shareable by
other resource groups as well as regions that are not currently used.

When a new resource group is created the hardware is initialized with
these new default allocations.

Signed-off-by: Reinette Chatre
---
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 69 ++++++++++++++++++++++++++++++--
 1 file changed, 66 insertions(+), 3 deletions(-)
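For reference, the intended default-allocation bit-math can be sketched in
plain userspace C as below. struct group_cbm and default_cbm() are invented
for this illustration and do not exist in the kernel; only the bitmask logic
mirrors what rdtgroup_init_alloc() in the diff does (start from the
hardware's shareable bits, add the bits of groups in shareable mode, then
add every bit no existing group uses):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct group_cbm {
	uint32_t cbm;		/* capacity bitmask of an existing group */
	bool	 shareable;	/* group is in "shareable" mode */
};

/* Assumes cbm_len < 32 for this toy example. */
static uint32_t default_cbm(uint32_t shareable_bits, unsigned int cbm_len,
			    const struct group_cbm *grps, int ngrps)
{
	uint32_t full = (1u << cbm_len) - 1;
	uint32_t new_cbm = shareable_bits;
	uint32_t used = shareable_bits;
	int i;

	for (i = 0; i < ngrps; i++) {
		used |= grps[i].cbm;
		if (grps[i].shareable)
			new_cbm |= grps[i].cbm;	/* may overlap shareable groups */
	}
	new_cbm |= ~used & full;		/* plus all currently unused bits */

	return new_cbm;				/* 0 would mean "no space left" */
}

int main(void)
{
	/* one exclusive group owns 0xf0; shareable_bits is 0x03; cbm_len is 8 */
	struct group_cbm grps[] = { { 0xf0, false } };
	uint32_t cbm = default_cbm(0x03, 8, grps, 1);

	printf("default CBM: 0x%x\n", (unsigned int)cbm); /* 0x0f: shareable + unused bits */
	return 0;
}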
diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 35e538eed977..b2008c697ce0 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -133,7 +133,7 @@ void closid_free(int closid)
  * Return: true if @closid is currently associated with a resource group,
  * false if @closid is free
  */
-static bool __attribute__ ((unused)) closid_allocated(unsigned int closid)
+static bool closid_allocated(unsigned int closid)
 {
 	return (closid_free_map & (1 << closid)) == 0;
 }
@@ -1799,6 +1799,64 @@ static int mkdir_mondata_all(struct kernfs_node *parent_kn,
 	return ret;
 }
 
+/**
+ * rdtgroup_init_alloc - Initialize the new RDT group's allocations
+ *
+ * A new RDT group is being created on an allocation capable (CAT)
+ * supporting system. Set this group up to start off with all usable
+ * allocations. That is, all shareable and unused bits.
+ *
+ * All-zero CBM is invalid. If there are no more shareable bits available
+ * on any domain then the entire allocation will fail.
+ */
+static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+{
+	u32 used_b = 0, unused_b = 0;
+	u32 closid = rdtgrp->closid;
+	struct rdt_resource *r;
+	enum rdtgrp_mode mode;
+	struct rdt_domain *d;
+	int i, ret;
+	u32 *ctrl;
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		list_for_each_entry(d, &r->domains, list) {
+			d->have_new_ctrl = false;
+			d->new_ctrl = r->cache.shareable_bits;
+			used_b = r->cache.shareable_bits;
+			ctrl = d->ctrl_val;
+			for (i = 0; i < r->num_closid; i++, ctrl++) {
+				if (closid_allocated(i) && i != closid) {
+					mode = rdtgroup_mode_by_closid(i);
+					used_b |= *ctrl;
+					if (mode == RDT_MODE_SHAREABLE)
+						d->new_ctrl |= *ctrl;
+				}
+			}
+			unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
+			unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
+			d->new_ctrl |= unused_b;
+			if (d->new_ctrl == 0) {
+				rdt_last_cmd_printf("no space on %s:%d\n",
+						    r->name, d->id);
+				return -ENOSPC;
+			}
+			d->have_new_ctrl = true;
+		}
+	}
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		ret = update_domains(r, rdtgrp->closid);
+		if (ret < 0) {
+			rdt_last_cmd_puts("failed to initialize allocations\n");
+			return ret;
+		}
+		rdtgrp->mode = RDT_MODE_SHAREABLE;
+	}
+
+	return 0;
+}
+
 static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 			     struct kernfs_node *prgrp_kn,
 			     const char *name, umode_t mode,
@@ -1957,6 +2015,10 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 	ret = 0;
 
 	rdtgrp->closid = closid;
+	ret = rdtgroup_init_alloc(rdtgrp);
+	if (ret < 0)
+		goto out_id_free;
+
 	list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
 
 	if (rdt_mon_capable) {
@@ -1967,15 +2029,16 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 		ret = mongroup_create_dir(kn, NULL, "mon_groups", NULL);
 		if (ret) {
 			rdt_last_cmd_puts("kernfs subdir error\n");
-			goto out_id_free;
+			goto out_del_list;
 		}
 	}
 
 	goto out_unlock;
 
+out_del_list:
+	list_del(&rdtgrp->rdtgroup_list);
 out_id_free:
 	closid_free(closid);
-	list_del(&rdtgrp->rdtgroup_list);
 out_common_fail:
 	mkdir_rdt_prepare_clean(rdtgrp);
 out_unlock:
-- 
2.13.6
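A side note on the last two hunks: because the new rdtgroup_init_alloc()
call runs before list_add(), the error labels are reordered so that each
failure point unwinds only the steps already completed, in reverse order of
setup. A compilable toy sketch of that goto-unwind pattern follows; every
function name in it is invented for the illustration and is not kernel code:

#include <stdio.h>

static int step_ok = 1;			/* flip to 0 to exercise the unwind */

static int alloc_id(void)       { puts("alloc id");   return 0; }
static int init_defaults(void)  { puts("init alloc"); return step_ok ? 0 : -1; }
static void add_to_list(void)   { puts("list_add"); }
static int make_subdir(void)    { puts("mkdir"); return 0; }
static void del_from_list(void) { puts("list_del"); }
static void free_id(void)       { puts("free id"); }

static int create_group(void)
{
	int ret;

	ret = alloc_id();
	if (ret < 0)
		goto out;

	ret = init_defaults();		/* the step this patch adds */
	if (ret < 0)
		goto out_free_id;	/* not on the list yet: only free the id */

	add_to_list();

	ret = make_subdir();
	if (ret < 0)
		goto out_del_list;	/* on the list now: unlink it first */

	return 0;

out_del_list:
	del_from_list();
out_free_id:
	free_id();
out:
	return ret;
}

int main(void)
{
	return create_group() ? 1 : 0;
}

Keeping one label per completed setup step means each failure path runs
exactly the cleanups it needs, which is why the patch moves list_del() out
of out_id_free into its own out_del_list label.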