Subject: Re: [bug report] resctrl high memory consumption
To: Fenghua Yu, Shakeel Butt
Cc: Borislav Petkov, LKML, Thomas Gleixner, Ingo Molnar, x86@kernel.org
References: <20200108202311.GA40461@romley-ivt3.sc.intel.com>
From: Reinette Chatre
Date: Wed, 8 Jan 2020 12:42:17 -0800
In-Reply-To: <20200108202311.GA40461@romley-ivt3.sc.intel.com>

Hi Fenghua,

On 1/8/2020 12:23 PM, Fenghua Yu wrote:
> On Wed, Jan 08, 2020 at 09:07:41AM -0800, Shakeel Butt wrote:
>> Hi,
>>
>> Recently we had a bug in the system software that wrote the same
>> pids to the tasks file of a resctrl group multiple times. The
>> resctrl code allocates a "struct task_move_callback" for each such
>> write and calls task_work_add() for that task to handle it on
>> return to user space, without checking whether such a request
>> already exists for that particular task. The issue arises for
>> long-sleeping tasks, which can have thousands of such requests
>> queued to be handled. In our production environment we noticed
>> thousands of tasks, each with thousands of such requests, taking
>> GiBs of memory for "struct task_move_callback". I am not familiar
>> enough with the code to judge whether task_work_cancel() is the
>> right approach or whether just checking closid/rmid before doing
>> task_work_add() would suffice.
>>
>
> Thank you for reporting the issue, Shakeel!
>
> Could you please check if the following patch fixes the issue?
>
> From 3c23c39b6a44fdfbbbe0083d074dcc114d7d7f1c Mon Sep 17 00:00:00 2001
> From: Fenghua Yu
> Date: Wed, 8 Jan 2020 19:53:33 +0000
> Subject: [RFC PATCH] x86/resctrl: Fix redundant task movements
>
> Currently a task can be moved to a rdtgroup multiple times. This can
> cause multiple task works to be added, wasting memory and degrading
> performance.
>
> To fix the issue, only move the task to a rdtgroup when the task is
> not already in it. Don't try to move the task again when it is
> already in the rdtgroup.
>
> Reported-by: Shakeel Butt
> Signed-off-by: Fenghua Yu
> ---
>  arch/x86/kernel/cpu/resctrl/rdtgroup.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 2e3b06d6bbc6..75300c4a5969 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -546,6 +546,17 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
>  	struct task_move_callback *callback;
>  	int ret;
>
> +	/* If the task is already in rdtgrp, don't move the task. */
> +	if ((rdtgrp->type == RDTCTRL_GROUP && tsk->closid == rdtgrp->closid &&
> +	     tsk->rmid == rdtgrp->mon.rmid) ||
> +	    (rdtgrp->type == RDTMON_GROUP &&
> +	     rdtgrp->mon.parent->closid == tsk->closid &&
> +	     tsk->rmid == rdtgrp->mon.rmid)) {
> +		rdt_last_cmd_puts("Task is already in the rdtgroup\n");
> +
> +		return -EINVAL;
> +	}
> +
>  	callback = kzalloc(sizeof(*callback), GFP_KERNEL);
>  	if (!callback)
>  		return -ENOMEM;

I think your fix would address this specific use case, but a slightly
different use case will still encounter the problem of high memory
consumption. If, for example, sleeping tasks are moved (many times)
between resource or monitoring groups, then their task_works queue
would just keep growing. It seems that a call to task_work_cancel()
before adding a new work item should address all these cases?

Reinette
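
For illustration, a minimal sketch of the task_work_cancel() approach
Reinette suggests is below. It assumes the v5.5-era kernel APIs
(task_work_cancel(task, func) returning the cancelled callback_head,
and task_work_add(task, work, notify) taking a bool) together with
resctrl's existing move_myself() callback. It is an untested outline,
not the committed fix, and a real patch would also have to release any
rdtgroup waitcount reference held by a cancelled request; that
bookkeeping is only flagged in a comment.

/* Sketch only: cancel a queued move before queueing a new one. */
static int __rdtgroup_move_task(struct task_struct *tsk,
				struct rdtgroup *rdtgrp)
{
	struct task_move_callback *callback;
	struct callback_head *old_work;
	int ret;

	/*
	 * Remove any move request already pending for this task so at
	 * most one "struct task_move_callback" is ever queued per
	 * task, no matter how often, or to which group, the task is
	 * moved while sleeping.
	 */
	old_work = task_work_cancel(tsk, move_myself);
	if (old_work) {
		/*
		 * NOTE: the rdtgroup waitcount reference taken for the
		 * cancelled request must also be dropped here; elided
		 * in this sketch.
		 */
		kfree(container_of(old_work, struct task_move_callback,
				   work));
	}

	callback = kzalloc(sizeof(*callback), GFP_KERNEL);
	if (!callback)
		return -ENOMEM;
	callback->work.func = move_myself;
	callback->rdtgrp = rdtgrp;

	/* Pin the group while the work is queued. */
	atomic_inc(&rdtgrp->waitcount);
	ret = task_work_add(tsk, &callback->work, true);
	if (ret) {
		/* Unwind the reference taken above. */
		atomic_dec(&rdtgrp->waitcount);
		kfree(callback);
		return ret;
	}

	/* ... existing closid/rmid update on success ... */
	return 0;
}

Unlike the closid/rmid equality check in the RFC patch above,
cancelling before queueing bounds the per-task queue at a single
pending move even when a sleeping task is shuffled between different
resource or monitoring groups.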