From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com,
        vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com, jithu.joseph@intel.com, dave.hansen@intel.com,
        mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
        linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>
Subject: [PATCH V4 14/38] x86/intel_rdt: Display resource groups' allocations' size in bytes
Date: Tue, 22 May 2018 04:29:02 -0700
Message-Id: <60c85d9cd5a6c2a15f61e24c7bc2e08828443a17.1526987654.git.reinette.chatre@intel.com>

The schemata file displays the allocations associated with each domain of
each resource. The syntax of this file reflects the capacity bitmask (CBM)
of the actual allocation. In order to determine the actual size of an
allocation, the user needs to dig through three different files to query
the variables needed to compute it (the cache size, the CBM length, and
the schemata).

Introduce a new file "size" associated with each resource group that will
mirror the schemata file syntax and display the size in bytes of each
allocation.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt.h          |  2 +
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 81 ++++++++++++++++++++++++++++++++
 2 files changed, 83 insertions(+)

diff --git a/arch/x86/kernel/cpu/intel_rdt.h b/arch/x86/kernel/cpu/intel_rdt.h
index 68d398bc2942..8bbb047bf37c 100644
--- a/arch/x86/kernel/cpu/intel_rdt.h
+++ b/arch/x86/kernel/cpu/intel_rdt.h
@@ -467,6 +467,8 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
 			   struct seq_file *s, void *v);
 bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
 			   u32 _cbm, int closid, bool exclusive);
+unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r, struct rdt_domain *d,
+				  u32 cbm);
 enum rdtgrp_mode rdtgroup_mode_by_closid(int closid);
 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
 int update_domains(struct rdt_resource *r, int closid);
diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index d0040f83532d..2af99e03faae 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -20,6 +20,7 @@
 
 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
 
+#include <linux/cacheinfo.h>
 #include <linux/cpu.h>
 #include <linux/fs.h>
 #include <linux/sysfs.h>
@@ -1016,6 +1017,78 @@ static ssize_t rdtgroup_mode_write(struct kernfs_open_file *of,
 	return ret ?: nbytes;
 }
 
+/**
+ * rdtgroup_cbm_to_size - Translate CBM to size in bytes
+ * @r: RDT resource to which @d belongs.
+ * @d: RDT domain instance.
+ * @cbm: bitmask for which the size should be computed.
+ *
+ * The bitmask provided, associated with the RDT domain instance @d, will be
+ * translated into how many bytes it represents. The size in bytes is
+ * computed by first dividing the total cache size by the CBM length to
+ * determine how many bytes each bit in the bitmask represents. The result
+ * is multiplied with the number of bits set in the bitmask.
+ */
+unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
+				  struct rdt_domain *d, u32 cbm)
+{
+	struct cpu_cacheinfo *ci;
+	unsigned int size = 0;
+	int num_b, i;
+
+	num_b = bitmap_weight((unsigned long *)&cbm, r->cache.cbm_len);
+	ci = get_cpu_cacheinfo(cpumask_any(&d->cpu_mask));
+	for (i = 0; i < ci->num_leaves; i++) {
+		if (ci->info_list[i].level == r->cache_level) {
+			size = ci->info_list[i].size / r->cache.cbm_len * num_b;
+			break;
+		}
+	}
+
+	return size;
+}
+
+/**
+ * rdtgroup_size_show - Display size in bytes of allocated regions
+ *
+ * The "size" file mirrors the layout of the "schemata" file, printing the
+ * size in bytes of each region instead of the capacity bitmask.
+ *
+ */
+static int rdtgroup_size_show(struct kernfs_open_file *of,
+			      struct seq_file *s, void *v)
+{
+	struct rdtgroup *rdtgrp;
+	struct rdt_resource *r;
+	struct rdt_domain *d;
+	unsigned int size;
+	bool sep = false;
+	u32 cbm;
+
+	rdtgrp = rdtgroup_kn_lock_live(of->kn);
+	if (!rdtgrp) {
+		rdtgroup_kn_unlock(of->kn);
+		return -ENOENT;
+	}
+
+	for_each_alloc_enabled_rdt_resource(r) {
+		seq_printf(s, "%*s:", max_name_width, r->name);
+		list_for_each_entry(d, &r->domains, list) {
+			if (sep)
+				seq_puts(s, ";");
+			cbm = d->ctrl_val[rdtgrp->closid];
+			size = rdtgroup_cbm_to_size(r, d, cbm);
+			seq_printf(s, "%d=%u", d->id, size);
+			sep = true;
+		}
+		seq_puts(s, "\n");
+	}
+
+	rdtgroup_kn_unlock(of->kn);
+
+	return 0;
+}
+
 /* rdtgroup information files for one cache resource. */
 static struct rftype res_common_files[] = {
 	{
@@ -1144,6 +1217,14 @@ static struct rftype res_common_files[] = {
 		.seq_show	= rdtgroup_mode_show,
 		.fflags		= RF_CTRL_BASE,
 	},
+	{
+		.name		= "size",
+		.mode		= 0444,
+		.kf_ops		= &rdtgroup_kf_single_ops,
+		.seq_show	= rdtgroup_size_show,
+		.fflags		= RF_CTRL_BASE,
+	},
+
 };
 
 static int rdtgroup_add_files(struct kernfs_node *kn, unsigned long fflags)
-- 
2.13.6
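
For illustration only (not part of the patch): without the new "size" file a
user has to combine the cache size, the CBM length and the schemata bitmask
by hand. The minimal userspace sketch below performs that computation with
the same integer arithmetic as rdtgroup_cbm_to_size(). The example values
(24 MiB cache, 20-bit CBM, mask 0x3f) and the popcount helper are
illustrative assumptions, not taken from the patch.

#include <stdio.h>

int main(void)
{
	/*
	 * Illustrative example values; in practice the cache size comes from
	 * a cacheinfo file such as /sys/devices/system/cpu/cpu0/cache/index3/size,
	 * the CBM width from the resctrl info directory (cbm_mask), and the
	 * bitmask from the resource group's schemata file.
	 */
	unsigned long cache_size = 24 * 1024 * 1024;	/* 24 MiB L3 cache     */
	unsigned int cbm_len = 20;			/* CBM is 20 bits wide */
	unsigned long cbm = 0x3f;			/* mask from schemata  */

	/* Same arithmetic as rdtgroup_cbm_to_size(): bytes represented by one
	 * CBM bit, multiplied by the number of bits set in the mask. */
	unsigned int bits = __builtin_popcountl(cbm);
	unsigned long size = cache_size / cbm_len * bits;

	printf("%lu bytes\n", size);	/* prints 7549746 */
	return 0;
}

With the patch applied, reading the group's "size" file reports the same
figure directly, one "domain_id=bytes" pair per domain in schemata layout,
e.g. a line of the form L3:0=7549746.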