From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Babu Moger, Borislav Petkov,
	Andrew Morton, Andy Lutomirski, Arnd Bergmann, Brijesh Singh,
	"Chang S. Bae", David Miller, David Woodhouse, Dmitry Safonov,
	Fenghua Yu, "H. Peter Anvin", Ingo Molnar, Jann Horn, Joerg Roedel,
	Jonathan Corbet, Josh Poimboeuf, Kate Stewart, "Kirill A. Shutemov",
	linux-doc@vger.kernel.org, Mauro Carvalho Chehab, Paolo Bonzini,
	Peter Zijlstra, Philippe Ombredanne, Pu Wen, qianyue.zj@alibaba-inc.com,
	"Rafael J. Wysocki", Reinette Chatre, Rian Hunter, Sherry Hurwitz,
	Suravee Suthikulpanit, Thomas Gleixner, Thomas Lendacky, Tony Luck,
	Vitaly Kuznetsov, xiaochen.shen@intel.com, Sasha Levin
Subject: [PATCH 4.20 062/352] x86/resctrl: Fixup the user-visible strings
Date: Mon, 11 Feb 2019 15:14:49 +0100
Message-Id: <20190211141850.082586785@linuxfoundation.org>
In-Reply-To: <20190211141846.543045703@linuxfoundation.org>
References: <20190211141846.543045703@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.20-stable review patch. If anyone has any objections, please let me know.

------------------

[ Upstream commit 723f1a0dd8e26a7523ba068204bee11c95ded38d ]

Fix the messages in rdt_last_cmd_printf() and rdt_last_cmd_puts() to
make them more meaningful and consistent.

[ bp: s/cpu/CPU/; s/mem\W/memory ]

Signed-off-by: Babu Moger
Signed-off-by: Borislav Petkov
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Brijesh Singh
Cc: "Chang S. Bae"
Cc: David Miller
Cc: David Woodhouse
Cc: Dmitry Safonov
Cc: Fenghua Yu
Cc: Greg Kroah-Hartman
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jann Horn
Cc: Joerg Roedel
Cc: Jonathan Corbet
Cc: Josh Poimboeuf
Cc: Kate Stewart
Cc: "Kirill A. Shutemov"
Cc: linux-doc@vger.kernel.org
Cc: Mauro Carvalho Chehab
Cc: Paolo Bonzini
Cc: Peter Zijlstra
Cc: Philippe Ombredanne
Cc: Pu Wen
Cc: qianyue.zj@alibaba-inc.com
Cc: "Rafael J. Wysocki"
Cc: Reinette Chatre
Cc: Rian Hunter
Cc: Sherry Hurwitz
Cc: Suravee Suthikulpanit
Cc: Thomas Gleixner
Cc: Thomas Lendacky
Cc: Tony Luck
Cc: Vitaly Kuznetsov
Cc: xiaochen.shen@intel.com
Link: https://lkml.kernel.org/r/20181121202811.4492-11-babu.moger@amd.com
Signed-off-by: Sasha Levin
---
 arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c | 22 ++++++------
 arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c | 34 +++++++++----------
 arch/x86/kernel/cpu/intel_rdt_rdtgroup.c    | 36 ++++++++++-----------
 3 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c b/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c
index c8b72aff55e0..6e76ada71211 100644
--- a/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c
+++ b/arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c
@@ -71,7 +71,7 @@ int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r,
 	unsigned long bw_val;
 
 	if (d->have_new_ctrl) {
-		rdt_last_cmd_printf("duplicate domain %d\n", d->id);
+		rdt_last_cmd_printf("Duplicate domain %d\n", d->id);
 		return -EINVAL;
 	}
 
@@ -97,12 +97,12 @@ static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
 
 	ret = kstrtoul(buf, 16, &val);
 	if (ret) {
-		rdt_last_cmd_printf("non-hex character in mask %s\n", buf);
+		rdt_last_cmd_printf("Non-hex character in the mask %s\n", buf);
 		return false;
 	}
 
 	if (val == 0 || val > r->default_ctrl) {
-		rdt_last_cmd_puts("mask out of range\n");
+		rdt_last_cmd_puts("Mask out of range\n");
 		return false;
 	}
 
@@ -110,12 +110,12 @@ static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
 	zero_bit = find_next_zero_bit(&val, cbm_len, first_bit);
 
 	if (find_next_bit(&val, cbm_len, zero_bit) < cbm_len) {
-		rdt_last_cmd_printf("mask %lx has non-consecutive 1-bits\n", val);
+		rdt_last_cmd_printf("The mask %lx has non-consecutive 1-bits\n", val);
 		return false;
 	}
 
 	if ((zero_bit - first_bit) < r->cache.min_cbm_bits) {
-		rdt_last_cmd_printf("Need at least %d bits in mask\n",
+		rdt_last_cmd_printf("Need at least %d bits in the mask\n",
 				    r->cache.min_cbm_bits);
 		return false;
 	}
@@ -135,7 +135,7 @@ int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
 	u32 cbm_val;
 
 	if (d->have_new_ctrl) {
-		rdt_last_cmd_printf("duplicate domain %d\n", d->id);
+		rdt_last_cmd_printf("Duplicate domain %d\n", d->id);
 		return -EINVAL;
 	}
 
@@ -145,7 +145,7 @@ int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
 	 */
 	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
 	    rdtgroup_pseudo_locked_in_hierarchy(d)) {
-		rdt_last_cmd_printf("pseudo-locked region in hierarchy\n");
+		rdt_last_cmd_printf("Pseudo-locked region in hierarchy\n");
 		return -EINVAL;
 	}
 
@@ -164,14 +164,14 @@ int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
 	 * either is exclusive.
 	 */
 	if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, true)) {
-		rdt_last_cmd_printf("overlaps with exclusive group\n");
+		rdt_last_cmd_printf("Overlaps with exclusive group\n");
 		return -EINVAL;
 	}
 
 	if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, false)) {
 		if (rdtgrp->mode == RDT_MODE_EXCLUSIVE ||
 		    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
-			rdt_last_cmd_printf("overlaps with other group\n");
+			rdt_last_cmd_printf("Overlaps with other group\n");
 			return -EINVAL;
 		}
 	}
@@ -293,7 +293,7 @@ static int rdtgroup_parse_resource(char *resname, char *tok,
 		if (!strcmp(resname, r->name) && rdtgrp->closid < r->num_closid)
 			return parse_line(tok, r, rdtgrp);
 	}
-	rdt_last_cmd_printf("unknown/unsupported resource name '%s'\n", resname);
+	rdt_last_cmd_printf("Unknown or unsupported resource name '%s'\n", resname);
 	return -EINVAL;
 }
 
@@ -326,7 +326,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
 	 */
 	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
 		ret = -EINVAL;
-		rdt_last_cmd_puts("resource group is pseudo-locked\n");
+		rdt_last_cmd_puts("Resource group is pseudo-locked\n");
 		goto out;
 	}
 
diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
index 815b4e92522c..cde746d64600 100644
--- a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
+++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
@@ -213,7 +213,7 @@ static int pseudo_lock_cstates_constrain(struct pseudo_lock_region *plr)
 	for_each_cpu(cpu, &plr->d->cpu_mask) {
 		pm_req = kzalloc(sizeof(*pm_req), GFP_KERNEL);
 		if (!pm_req) {
-			rdt_last_cmd_puts("fail allocating mem for PM QoS\n");
+			rdt_last_cmd_puts("Failure to allocate memory for PM QoS\n");
 			ret = -ENOMEM;
 			goto out_err;
 		}
@@ -222,7 +222,7 @@ static int pseudo_lock_cstates_constrain(struct pseudo_lock_region *plr)
 					     DEV_PM_QOS_RESUME_LATENCY,
 					     30);
 		if (ret < 0) {
-			rdt_last_cmd_printf("fail to add latency req cpu%d\n",
+			rdt_last_cmd_printf("Failed to add latency req CPU%d\n",
 					    cpu);
 			kfree(pm_req);
 			ret = -1;
@@ -289,7 +289,7 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 	plr->cpu = cpumask_first(&plr->d->cpu_mask);
 
 	if (!cpu_online(plr->cpu)) {
-		rdt_last_cmd_printf("cpu %u associated with cache not online\n",
+		rdt_last_cmd_printf("CPU %u associated with cache not online\n",
 				    plr->cpu);
 		ret = -ENODEV;
 		goto out_region;
@@ -307,7 +307,7 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 	}
 
 	ret = -1;
-	rdt_last_cmd_puts("unable to determine cache line size\n");
+	rdt_last_cmd_puts("Unable to determine cache line size\n");
 out_region:
 	pseudo_lock_region_clear(plr);
 	return ret;
@@ -361,14 +361,14 @@ static int pseudo_lock_region_alloc(struct pseudo_lock_region *plr)
 	 * KMALLOC_MAX_SIZE.
 	 */
 	if (plr->size > KMALLOC_MAX_SIZE) {
-		rdt_last_cmd_puts("requested region exceeds maximum size\n");
+		rdt_last_cmd_puts("Requested region exceeds maximum size\n");
 		ret = -E2BIG;
 		goto out_region;
 	}
 
 	plr->kmem = kzalloc(plr->size, GFP_KERNEL);
 	if (!plr->kmem) {
-		rdt_last_cmd_puts("unable to allocate memory\n");
+		rdt_last_cmd_puts("Unable to allocate memory\n");
 		ret = -ENOMEM;
 		goto out_region;
 	}
@@ -665,7 +665,7 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
 	 * default closid associated with it.
 	 */
 	if (rdtgrp == &rdtgroup_default) {
-		rdt_last_cmd_puts("cannot pseudo-lock default group\n");
+		rdt_last_cmd_puts("Cannot pseudo-lock default group\n");
 		return -EINVAL;
 	}
 
@@ -707,17 +707,17 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
 	 */
 	prefetch_disable_bits = get_prefetch_disable_bits();
 	if (prefetch_disable_bits == 0) {
-		rdt_last_cmd_puts("pseudo-locking not supported\n");
+		rdt_last_cmd_puts("Pseudo-locking not supported\n");
 		return -EINVAL;
 	}
 
 	if (rdtgroup_monitor_in_progress(rdtgrp)) {
-		rdt_last_cmd_puts("monitoring in progress\n");
+		rdt_last_cmd_puts("Monitoring in progress\n");
 		return -EINVAL;
 	}
 
 	if (rdtgroup_tasks_assigned(rdtgrp)) {
-		rdt_last_cmd_puts("tasks assigned to resource group\n");
+		rdt_last_cmd_puts("Tasks assigned to resource group\n");
 		return -EINVAL;
 	}
 
@@ -727,13 +727,13 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
 	}
 
 	if (rdtgroup_locksetup_user_restrict(rdtgrp)) {
-		rdt_last_cmd_puts("unable to modify resctrl permissions\n");
+		rdt_last_cmd_puts("Unable to modify resctrl permissions\n");
 		return -EIO;
 	}
 
 	ret = pseudo_lock_init(rdtgrp);
 	if (ret) {
-		rdt_last_cmd_puts("unable to init pseudo-lock region\n");
+		rdt_last_cmd_puts("Unable to init pseudo-lock region\n");
 		goto out_release;
 	}
 
@@ -770,7 +770,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
 	if (rdt_mon_capable) {
 		ret = alloc_rmid();
 		if (ret < 0) {
-			rdt_last_cmd_puts("out of RMIDs\n");
+			rdt_last_cmd_puts("Out of RMIDs\n");
 			return ret;
 		}
 		rdtgrp->mon.rmid = ret;
@@ -1304,7 +1304,7 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 					"pseudo_lock/%u", plr->cpu);
 	if (IS_ERR(thread)) {
 		ret = PTR_ERR(thread);
-		rdt_last_cmd_printf("locking thread returned error %d\n", ret);
+		rdt_last_cmd_printf("Locking thread returned error %d\n", ret);
 		goto out_cstates;
 	}
 
@@ -1322,13 +1322,13 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 		 * the cleared, but not freed, plr struct resulting in an
 		 * empty pseudo-locking loop.
 		 */
-		rdt_last_cmd_puts("locking thread interrupted\n");
+		rdt_last_cmd_puts("Locking thread interrupted\n");
 		goto out_cstates;
 	}
 
 	ret = pseudo_lock_minor_get(&new_minor);
 	if (ret < 0) {
-		rdt_last_cmd_puts("unable to obtain a new minor number\n");
+		rdt_last_cmd_puts("Unable to obtain a new minor number\n");
 		goto out_cstates;
 	}
 
@@ -1360,7 +1360,7 @@ int rdtgroup_pseudo_lock_create(struct rdtgroup *rdtgrp)
 
 	if (IS_ERR(dev)) {
 		ret = PTR_ERR(dev);
-		rdt_last_cmd_printf("failed to create character device: %d\n",
+		rdt_last_cmd_printf("Failed to create character device: %d\n",
 				    ret);
 		goto out_debugfs;
 	}
diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 951c61367688..17b63a4748bf 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -345,7 +345,7 @@ static int cpus_mon_write(struct rdtgroup *rdtgrp, cpumask_var_t newmask,
 	/* Check whether cpus belong to parent ctrl group */
 	cpumask_andnot(tmpmask, newmask, &prgrp->cpu_mask);
 	if (cpumask_weight(tmpmask)) {
-		rdt_last_cmd_puts("can only add CPUs to mongroup that belong to parent\n");
+		rdt_last_cmd_puts("Can only add CPUs to mongroup that belong to parent\n");
 		return -EINVAL;
 	}
 
@@ -470,14 +470,14 @@ static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of,
 	rdt_last_cmd_clear();
 	if (!rdtgrp) {
 		ret = -ENOENT;
-		rdt_last_cmd_puts("directory was removed\n");
+		rdt_last_cmd_puts("Directory was removed\n");
 		goto unlock;
 	}
 
 	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED ||
 	    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
 		ret = -EINVAL;
-		rdt_last_cmd_puts("pseudo-locking in progress\n");
+		rdt_last_cmd_puts("Pseudo-locking in progress\n");
 		goto unlock;
 	}
 
@@ -487,7 +487,7 @@ static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of,
 		ret = cpumask_parse(buf, newmask);
 
 	if (ret) {
-		rdt_last_cmd_puts("bad cpu list/mask\n");
+		rdt_last_cmd_puts("Bad CPU list/mask\n");
 		goto unlock;
 	}
 
@@ -495,7 +495,7 @@ static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of,
 	cpumask_andnot(tmpmask, newmask, cpu_online_mask);
 	if (cpumask_weight(tmpmask)) {
 		ret = -EINVAL;
-		rdt_last_cmd_puts("can only assign online cpus\n");
+		rdt_last_cmd_puts("Can only assign online CPUs\n");
 		goto unlock;
 	}
 
@@ -574,7 +574,7 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 		 */
 		atomic_dec(&rdtgrp->waitcount);
 		kfree(callback);
-		rdt_last_cmd_puts("task exited\n");
+		rdt_last_cmd_puts("Task exited\n");
 	} else {
 		/*
 		 * For ctrl_mon groups move both closid and rmid.
@@ -692,7 +692,7 @@ static ssize_t rdtgroup_tasks_write(struct kernfs_open_file *of,
 	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED ||
 	    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
 		ret = -EINVAL;
-		rdt_last_cmd_puts("pseudo-locking in progress\n");
+		rdt_last_cmd_puts("Pseudo-locking in progress\n");
 		goto unlock;
 	}
 
@@ -1158,14 +1158,14 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
 		list_for_each_entry(d, &r->domains, list) {
 			if (rdtgroup_cbm_overlaps(r, d, d->ctrl_val[closid],
 						  rdtgrp->closid, false)) {
-				rdt_last_cmd_puts("schemata overlaps\n");
+				rdt_last_cmd_puts("Schemata overlaps\n");
 				return false;
 			}
 		}
 	}
 
 	if (!has_cache) {
-		rdt_last_cmd_puts("cannot be exclusive without CAT/CDP\n");
+		rdt_last_cmd_puts("Cannot be exclusive without CAT/CDP\n");
 		return false;
 	}
 
@@ -1206,7 +1206,7 @@ static ssize_t rdtgroup_mode_write(struct kernfs_open_file *of,
 		goto out;
 
 	if (mode == RDT_MODE_PSEUDO_LOCKED) {
-		rdt_last_cmd_printf("cannot change pseudo-locked group\n");
+		rdt_last_cmd_printf("Cannot change pseudo-locked group\n");
 		ret = -EINVAL;
 		goto out;
 	}
@@ -1235,7 +1235,7 @@ static ssize_t rdtgroup_mode_write(struct kernfs_open_file *of,
 			goto out;
 		rdtgrp->mode = RDT_MODE_PSEUDO_LOCKSETUP;
 	} else {
-		rdt_last_cmd_printf("unknown/unsupported mode\n");
+		rdt_last_cmd_printf("Unknown or unsupported mode\n");
 		ret = -EINVAL;
 	}
 
@@ -2540,7 +2540,7 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
 			tmp_cbm = d->new_ctrl;
 			if (bitmap_weight(&tmp_cbm, r->cache.cbm_len) <
 			    r->cache.min_cbm_bits) {
-				rdt_last_cmd_printf("no space on %s:%d\n",
+				rdt_last_cmd_printf("No space on %s:%d\n",
 						    r->name, d->id);
 				return -ENOSPC;
 			}
@@ -2557,7 +2557,7 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
 			continue;
 		ret = update_domains(r, rdtgrp->closid);
 		if (ret < 0) {
-			rdt_last_cmd_puts("failed to initialize allocations\n");
+			rdt_last_cmd_puts("Failed to initialize allocations\n");
 			return ret;
 		}
 		rdtgrp->mode = RDT_MODE_SHAREABLE;
@@ -2580,7 +2580,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 	rdt_last_cmd_clear();
 	if (!prdtgrp) {
 		ret = -ENODEV;
-		rdt_last_cmd_puts("directory was removed\n");
+		rdt_last_cmd_puts("Directory was removed\n");
 		goto out_unlock;
 	}
 
@@ -2588,7 +2588,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 	    (prdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP ||
 	     prdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)) {
 		ret = -EINVAL;
-		rdt_last_cmd_puts("pseudo-locking in progress\n");
+		rdt_last_cmd_puts("Pseudo-locking in progress\n");
 		goto out_unlock;
 	}
 
@@ -2596,7 +2596,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 	rdtgrp = kzalloc(sizeof(*rdtgrp), GFP_KERNEL);
 	if (!rdtgrp) {
 		ret = -ENOSPC;
-		rdt_last_cmd_puts("kernel out of memory\n");
+		rdt_last_cmd_puts("Kernel out of memory\n");
 		goto out_unlock;
 	}
 	*r = rdtgrp;
@@ -2637,7 +2637,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
 	if (rdt_mon_capable) {
 		ret = alloc_rmid();
 		if (ret < 0) {
-			rdt_last_cmd_puts("out of RMIDs\n");
+			rdt_last_cmd_puts("Out of RMIDs\n");
 			goto out_destroy;
 		}
 		rdtgrp->mon.rmid = ret;
@@ -2725,7 +2725,7 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
 	kn = rdtgrp->kn;
 	ret = closid_alloc();
 	if (ret < 0) {
-		rdt_last_cmd_puts("out of CLOSIDs\n");
+		rdt_last_cmd_puts("Out of CLOSIDs\n");
 		goto out_common_fail;
 	}
 	closid = ret;
-- 
2.19.1