Date: Wed, 16 Jun 2021 14:54:52 +0200
From: Frederic Weisbecker
To: Peter Zijlstra
Cc: hasegawa-hitomi@fujitsu.com, mingo@kernel.org, fweisbec@gmail.com,
"'tglx@linutronix.de'" , "'juri.lelli@redhat.com'" , "'vincent.guittot@linaro.org'" , "'dietmar.eggemann@arm.com'" , "'rostedt@goodmis.org'" , "'bsegall@google.com'" , "'mgorman@suse.de'" , "'bristot@redhat.com'" , "'linux-kernel@vger.kernel.org'" Subject: Re: Utime and stime are less when getrusage (RUSAGE_THREAD) is executed on a tickless CPU. Message-ID: <20210616125452.GE801071@lothringen> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, May 19, 2021 at 11:24:58AM +0200, Peter Zijlstra wrote: > On Wed, May 19, 2021 at 06:30:36AM +0000, hasegawa-hitomi@fujitsu.com wrote: > > Hi Ingo, Peter, Juri, and Vincent. > > > > > > > Your email is malformed. > > > > I'm sorry. I was sent in the wrong format. I correct it and resend. > > Thank you, Peter, for pointing this out. > > > > > > I found that when I run getrusage(RUSAGE_THREAD) on a tickless CPU, > > the utime and stime I get are less than the actual time, unlike when I run > > getrusage(RUSAGE_SELF) on a single thread. > > This problem seems to be caused by the fact that se.sum_exec_runtime is not > > updated just before getting the information from 'current'. > > In the current implementation, task_cputime_adjusted() calls task_cputime() to > > get the 'current' utime and stime, then calls cputime_adjust() to adjust the > > sum of utime and stime to be equal to cputime.sum_exec_runtime. On a tickless > > CPU, sum_exec_runtime is not updated periodically, so there seems to be a > > discrepancy with the actual time. > > Therefore, I think I should include a process to update se.sum_exec_runtime > > just before getting the information from 'current' (as in other processes > > except RUSAGE_THREAD). I'm thinking of the following improvement. > > > > @@ void getrusage(struct task_struct *p, int who, struct rusage *r) > > if (who == RUSAGE_THREAD) { > > + task_sched_runtime(current); > > task_cputime_adjusted(current, &utime, &stime); > > > > Is there any possible problem with this? > > Would be superfluous for CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y > architectures at the very least. > > It also doesn't help any of the other callers, like for example procfs. > > Something like the below ought to work and fix all variants I think. But > it does make the call significantly more expensive. > > Looking at thread_group_cputime() that already does something like this, > but that's also susceptible to a variant of this very same issue; since > it doesn't call it unconditionally, nor on all tasks, so if current > isn't part of the threadgroup and/or another task is on a nohz_full cpu, > things will go wobbly again. > > There's a note about syscall performance there, so clearly someone seems > to care about that aspect of things, but it does suck for nohz_full. > > Frederic, didn't we have remote ticks that should help with this stuff? > > And mostly I think the trade-off here is that if you run on nohz_full, > you're not expected to go do syscalls anyway (because they're sodding > expensive) and hence the accuracy of these sort of things is mostly > irrelevant. > > So it might be the use-case is just fundamentally bonkers and we > shouldn't really bother fixing this. > > Anyway? 
If necessary I guess we can do something like the below, which would only
add the overhead where it's required:

diff --git a/include/linux/sched/cputime.h b/include/linux/sched/cputime.h
index 6c9f19a33865..ce3c58286062 100644
--- a/include/linux/sched/cputime.h
+++ b/include/linux/sched/cputime.h
@@ -18,15 +18,16 @@
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
 
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
-extern void task_cputime(struct task_struct *t,
+extern bool task_cputime(struct task_struct *t,
 			 u64 *utime, u64 *stime);
 extern u64 task_gtime(struct task_struct *t);
 #else
-static inline void task_cputime(struct task_struct *t,
+static inline bool task_cputime(struct task_struct *t,
 				u64 *utime, u64 *stime)
 {
 	*utime = t->utime;
 	*stime = t->stime;
+	return false;
 }
 
 static inline u64 task_gtime(struct task_struct *t)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 872e481d5098..9392aea1804e 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -615,7 +615,8 @@ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
 		.sum_exec_runtime = p->se.sum_exec_runtime,
 	};
 
-	task_cputime(p, &cputime.utime, &cputime.stime);
+	if (task_cputime(p, &cputime.utime, &cputime.stime))
+		cputime.sum_exec_runtime = task_sched_runtime(p);
 	cputime_adjust(&cputime, &p->prev_cputime, ut, st);
 }
 EXPORT_SYMBOL_GPL(task_cputime_adjusted);
@@ -828,19 +829,21 @@ u64 task_gtime(struct task_struct *t)
  * add up the pending nohz execution time since the last
  * cputime snapshot.
  */
-void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
+bool task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
 {
 	struct vtime *vtime = &t->vtime;
 	unsigned int seq;
 	u64 delta;
+	int ret;
 
 	if (!vtime_accounting_enabled()) {
 		*utime = t->utime;
 		*stime = t->stime;
-		return;
+		return false;
 	}
 
 	do {
+		ret = false;
 		seq = read_seqcount_begin(&vtime->seqcount);
 
 		*utime = t->utime;
@@ -850,6 +853,7 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
 		if (vtime->state < VTIME_SYS)
 			continue;
 
+		ret = true;
 		delta = vtime_delta(vtime);
 
 		/*
@@ -861,6 +865,8 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
 		else
 			*utime += vtime->utime + delta;
 	} while (read_seqcount_retry(&vtime->seqcount, seq));
+
+	return ret;
 }
 
 static int vtime_state_fetch(struct vtime *vtime, int cpu)
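A side note on why a stale sum_exec_runtime makes *both* reported fields
shrink even though the vtime-based utime/stime samples themselves are
fresh: cputime_adjust() rescales utime and stime so that their sum equals
rtime (sum_exec_runtime). The sketch below is a simplified model of that
scaling, not the kernel code: it ignores the overflow-safe multiplication
(mul_u64_u64_div_u64), the zero-utime/stime special cases, and the
monotonicity guarantees enforced through prev_cputime.

/*
 * Simplified model of the proportional scaling in cputime_adjust();
 * toy code under the assumptions stated above, not the actual kernel
 * implementation.
 */
#include <stdio.h>

typedef unsigned long long u64;

static void adjust(u64 rtime, u64 utime, u64 stime, u64 *ut, u64 *st)
{
	u64 total = utime + stime;

	/* scale stime by rtime/total, give the remainder to utime */
	*st = total ? rtime * stime / total : 0;
	*ut = rtime - *st;
}

int main(void)
{
	u64 ut, st;

	/* vtime says 1.5s + 1.5s, but se.sum_exec_runtime is stuck at 1s */
	adjust(1000000000ULL, 1500000000ULL, 1500000000ULL, &ut, &st);
	printf("stale rtime: ut=%llu st=%llu\n", ut, st);	/* 0.5s each */

	/* with rtime refreshed via task_sched_runtime() to 3s */
	adjust(3000000000ULL, 1500000000ULL, 1500000000ULL, &ut, &st);
	printf("fresh rtime: ut=%llu st=%llu\n", ut, st);	/* 1.5s each */
	return 0;
}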