From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Frederic Weisbecker,
    Glauber Costa, Linus Torvalds, Peter Zijlstra, Ingo Molnar, Jiri Slaby
Subject: [PATCH 3.12 96/98] sched/cputime: Fix steal time accounting vs. CPU hotplug
Date: Mon, 11 Apr 2016 15:23:38 +0200
Message-Id: <05a73729b97e05d9356a01d8ab940633a3d2005f.1460380917.git.jslaby@suse.cz>
X-Mailer: git-send-email 2.8.1

From: Thomas Gleixner

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit e9532e69b8d1d1284e8ecf8d2586de34aec61244 upstream.

On CPU hotplug the steal time accounting can keep a stale
rq->prev_steal_time value across a CPU down/up cycle. After the CPU
comes up again, the delta calculation in steal_account_process_tick()
wrecks itself due to the unsigned math:

	u64 steal = paravirt_steal_clock(smp_processor_id());

	steal -= this_rq()->prev_steal_time;

If steal is smaller than rq->prev_steal_time, we end up with an
insanely large value which then gets added to rq->prev_steal_time,
permanently wrecking the accounting. As a consequence the per-CPU
stats in /proc/stat become stale.

Nice trick to tell the world how idle the system is (100%) while the
CPU is 100% busy running tasks. Though we prefer realistic numbers.

None of the accounting values which use a previous value to account
for fractions are reset at CPU hotplug time. update_rq_clock_task()
has a sanity check for prev_irq_time and prev_steal_time_rq, but that
check deals solely with clock warps and only limits the wreckage
visible in /proc/stat. The prev_time values are still wrong.

Solution is simple: Reset rq->prev_*_time when the CPU is plugged in
again.
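To make the unsigned wraparound concrete, here is a minimal standalone C
sketch (not kernel code; the steal values are invented purely for
illustration) of the arithmetic that steal_account_process_tick() ends up
doing when prev_steal_time is stale:

	/* Standalone model of the wraparound; numbers are made up. */
	#include <stdint.h>
	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		/* stale rq->prev_steal_time kept across CPU down/up */
		uint64_t prev_steal_time = 5000000000ULL; /* 5s of steal before unplug */

		/* fresh paravirt_steal_clock() reading after the CPU comes back */
		uint64_t steal = 1000000000ULL;           /* only 1s reported now */

		steal -= prev_steal_time; /* u64 underflow: wraps to ~2^64 */

		printf("bogus steal delta: %" PRIu64 "\n", steal);

		/*
		 * This bogus delta is what gets folded back into the accounting;
		 * resetting the prev_*_time fields on CPU_UP_PREPARE avoids it.
		 */
		return 0;
	}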
Signed-off-by: Thomas Gleixner
Acked-by: Rik van Riel
Cc: Frederic Weisbecker
Cc: Glauber Costa
Cc: Linus Torvalds
Cc: Peter Zijlstra
Fixes: commit 095c0aa83e52 "sched: adjust scheduler cpu power for stolen time"
Fixes: commit aa483808516c "sched: Remove irq time from available CPU power"
Fixes: commit e6e6685accfa "KVM guest: Steal time accounting"
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1603041539490.3686@nanos
Signed-off-by: Ingo Molnar
Signed-off-by: Jiri Slaby
---
 kernel/sched/core.c  |  1 +
 kernel/sched/sched.h | 13 +++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7381119ec1e9..dd794a9b6850 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4710,6 +4710,7 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 
 	case CPU_UP_PREPARE:
 		rq->calc_load_update = calc_load_update;
+		account_reset_rq(rq);
 		break;
 
 	case CPU_ONLINE:
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e09e3e0466f7..2f761b74dee3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1383,3 +1383,16 @@ static inline u64 irq_time_read(int cpu)
 }
 #endif /* CONFIG_64BIT */
 #endif /* CONFIG_IRQ_TIME_ACCOUNTING */
+
+static inline void account_reset_rq(struct rq *rq)
+{
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	rq->prev_irq_time = 0;
+#endif
+#ifdef CONFIG_PARAVIRT
+	rq->prev_steal_time = 0;
+#endif
+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+	rq->prev_steal_time_rq = 0;
+#endif
+}
-- 
2.8.1