Message-ID: <1588429500.8505.29.camel@suse.cz>
Subject: Re: [PATCH 1/2] x86, sched: Prevent divisions by zero in frequency
 invariant accounting
From: Giovanni Gherdovich
To: Peter Zijlstra
Cc: Srinivas Pandruvada, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Len Brown, "Rafael J.
Wysocki" , x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, Ricardo Neri , Linus Torvalds Date: Sat, 02 May 2020 16:25:00 +0200 In-Reply-To: <20200501133042.GE3762@hirez.programming.kicks-ass.net> References: <20200428132450.24901-1-ggherdovich@suse.cz> <20200428132450.24901-2-ggherdovich@suse.cz> <20200501133042.GE3762@hirez.programming.kicks-ass.net> Content-Type: text/plain; charset="UTF-8" X-Mailer: Evolution 3.26.6 Mime-Version: 1.0 Content-Transfer-Encoding: 7bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, 2020-05-01 at 15:30 +0200, Peter Zijlstra wrote: > On Tue, Apr 28, 2020 at 03:24:49PM +0200, Giovanni Gherdovich wrote: > > The product mcnt * arch_max_freq_ratio could be zero if it overflows u64. > > > > For context, a large value for arch_max_freq_ratio would be 5000, > > corresponding to a turbo_freq/base_freq ratio of 5 (normally it's more like > > 1500-2000). A large increment frequency for the MPERF counter would be 5GHz > > (the base clock of all CPUs on the market today is less than that). With > > these figures, a CPU would need to go without a scheduler tick for around 8 > > days for the u64 overflow to happen. It is unlikely, but the check is > > warranted. > > > > In that case it's also appropriate to disable frequency invariant > > accounting: the feature relies on measures of the clock frequency done at > > every scheduler tick, which need to be "fresh" to be at all meaningful. > > > > Signed-off-by: Giovanni Gherdovich > > Fixes: 1567c3e3467c ("x86, sched: Add support for frequency invariance") > > acnt <<= 2*SCHED_CAPACITY_SHIFT; > > mcnt *= arch_max_freq_ratio; > > + if (!mcnt) { > > The problem is; this doesn't do what you claim it does. > > > + pr_warn("Scheduler tick missing for long time, disabling scale-invariant accounting.\n"); > > + /* static_branch_disable() acquires a lock and may sleep */ > > + schedule_work(&disable_freq_invariance_work); > > + return; > > + } > > > > freq_scale = div64_u64(acnt, mcnt); > > I've changed the patch like so.. OK? > > (ok, perhaps I went a little overboard with the paranoia ;-) Right, I wasn't really checking for overflow, only for when the product "mcnt * arch_max_freq_ratio" becomes zero. Thanks for your edit (I took note of the macros check_*_overflow, didn't know them). I fully subscribe to the paranoid approach. 
I understand you've already edited the patches in your tree, so I am
not resending; I am just confirming my

Signed-off-by: Giovanni Gherdovich <ggherdovich@suse.cz>

> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -55,6 +55,7 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <linux/overflow.h>
>  
>  #include <...>
>  #include <...>
> @@ -2057,11 +2058,19 @@ static void init_freq_invariance(bool se
>          }
>  }
>  
> +static void disable_freq_invariance_workfn(struct work_struct *work)
> +{
> +        static_branch_disable(&arch_scale_freq_key);
> +}
> +
> +static DECLARE_WORK(disable_freq_invariance_work,
> +                    disable_freq_invariance_workfn);
> +
>  DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
>  
>  void arch_scale_freq_tick(void)
>  {
> -        u64 freq_scale;
> +        u64 freq_scale = SCHED_CAPACITY_SCALE;
>          u64 aperf, mperf;
>          u64 acnt, mcnt;
>  
> @@ -2073,19 +2082,27 @@ void arch_scale_freq_tick(void)
>  
>          acnt = aperf - this_cpu_read(arch_prev_aperf);
>          mcnt = mperf - this_cpu_read(arch_prev_mperf);
> -        if (!mcnt)
> -                return;
>  
>          this_cpu_write(arch_prev_aperf, aperf);
>          this_cpu_write(arch_prev_mperf, mperf);
>  
> -        acnt <<= 2*SCHED_CAPACITY_SHIFT;
> -        mcnt *= arch_max_freq_ratio;
> +        if (check_shl_overflow(acnt, 2*SCHED_CAPACITY_SHIFT, &acnt))
> +                goto error;
> +
> +        if (check_mul_overflow(mcnt, arch_max_freq_ratio, &mcnt) || !mcnt)
> +                goto error;
>  
>          freq_scale = div64_u64(acnt, mcnt);
> +        if (!freq_scale)
> +                goto error;
>  
>          if (freq_scale > SCHED_CAPACITY_SCALE)
>                  freq_scale = SCHED_CAPACITY_SCALE;
>  
>          this_cpu_write(arch_freq_scale, freq_scale);
> +        return;
> +
> +error:
> +        pr_warn("Scheduler frequency invariance went wobbly, disabling!\n");
> +        schedule_work(&disable_freq_invariance_work);
>  }
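For readers following along: what the tick computes is essentially
SCHED_CAPACITY_SCALE * (delta_APERF / delta_MPERF) / max_turbo_ratio,
clamped to SCHED_CAPACITY_SCALE, i.e. current capacity relative to max
turbo. A toy userspace rendition, with made-up deltas and ratio rather
than the kernel code, to make the fixed-point arithmetic concrete:

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT    10
#define SCHED_CAPACITY_SCALE    (1ULL << SCHED_CAPACITY_SHIFT)

/*
 * Toy model of the tick computation: acnt/mcnt stand in for the APERF
 * and MPERF deltas, max_freq_ratio for turbo_freq/base_freq scaled by
 * SCHED_CAPACITY_SCALE. No overflow checks here; those are the point
 * of the patch above.
 */
static uint64_t toy_freq_scale(uint64_t acnt, uint64_t mcnt,
                               uint64_t max_freq_ratio)
{
        uint64_t fs;

        acnt <<= 2 * SCHED_CAPACITY_SHIFT;
        mcnt *= max_freq_ratio;
        if (!mcnt)
                return SCHED_CAPACITY_SCALE;    /* avoid the division by zero */

        fs = acnt / mcnt;
        return fs > SCHED_CAPACITY_SCALE ? SCHED_CAPACITY_SCALE : fs;
}

int main(void)
{
        /* CPU ran at 1.5x base clock, turbo is 2x base (ratio 2048): */
        printf("freq_scale = %llu\n",
               (unsigned long long)toy_freq_scale(150, 100, 2048));
        /* prints 768, i.e. 75% of max turbo capacity (768/1024) */
        return 0;
}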
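And to double-check the "around 8 days" figure from the changelog, a
back-of-envelope calculation (my own numbers, reusing the 5GHz and
5000 figures quoted above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t mperf_hz = 5000000000ULL;  /* MPERF ticking at 5 GHz */
        uint64_t max_freq_ratio = 5000;     /* a large arch_max_freq_ratio */

        /* Largest MPERF delta whose product with the ratio fits in u64, */
        uint64_t max_mcnt = UINT64_MAX / max_freq_ratio;
        /* and how long the CPU must go tickless to accumulate it. */
        double days = (double)max_mcnt / mperf_hz / 86400.0;

        printf("u64 overflow after ~%.1f days without a tick\n", days);
        return 0;
}

which prints ~8.5 days, consistent with the changelog.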