From: Vincent Guittot
Date: Thu, 9 Feb 2023 14:44:49 +0100
Subject: Re: [bug-report] possible s64 overflow in max_vruntime()
To: Roman Kagan, Vincent Guittot, Chen Yu, Peter Zijlstra, Zhang Qiao,
 Waiman Long, Ingo Molnar, Juri Lelli, Dietmar Eggemann, Steven Rostedt,
 Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, lkml
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 9 Feb 2023 at 14:33, Roman Kagan wrote:
>
> On Thu, Feb 09, 2023 at 12:26:12PM +0100, Vincent Guittot wrote:
> > On Wed, 8 Feb 2023 at 19:09, Roman Kagan wrote:
> > > On Wed, Feb 08, 2023 at 11:13:35AM +0100, Vincent Guittot wrote:
> > > > On Tue, 7 Feb 2023 at 20:37, Roman Kagan wrote:
> > > > >
> > > > > On Tue, Jan 31, 2023 at 12:10:29PM +0100, Vincent Guittot wrote:
> > > > > > On Tue, 31 Jan 2023 at 11:00, Roman Kagan wrote:
> > > > > > > On Tue, Jan 31, 2023 at 11:21:17AM +0800, Chen Yu wrote:
> > > > > > > > On 2023-01-27 at 17:18:56 +0100, Vincent Guittot wrote:
> > > > > > > > > On Fri, 27 Jan 2023 at 12:44, Peter Zijlstra wrote:
> > > > > > > > > >
> > > > > > > > > > On Thu, Jan 26, 2023 at 07:31:02PM +0100, Roman Kagan wrote:
> > > > > > > > > > > > All that only matters for small sleeps anyway.
> > > > > > > > > > > >
> > > > > > > > > > > > Something like:
> > > > > > > > > > > >
> > > > > > > > > > > >     sleep_time = U64_MAX;
> > > > > > > > > > > >     if (se->avg.last_update_time)
> > > > > > > > > > > >         sleep_time = cfs_rq_clock_pelt(cfs_rq) - se->avg.last_update_time;
> > > > > > > > > > >
> > > > > > > > > > > Interesting, why not rq_clock_task(rq_of(cfs_rq)) - se->exec_start, as
> > > > > > > > > > > others were suggesting? It appears to better match the notion of sleep
> > > > > > > > > > > wall-time, no?
> > > > > > > > > >
> > > > > > > > > > Should also work I suppose. cfs_rq_clock takes throttling into account,
> > > > > > > > > > but that should hopefully also not be *that* long, so either should
> > > > > > > > > > work.
> > > > > > > > >
> > > > > > > > > yes rq_clock_task(rq_of(cfs_rq)) should be fine too
> > > > > > > > >
> > > > > > > > > Another thing to take into account is the sleeper credit that the
> > > > > > > > > waking task deserves, so the detection should be done once it has been
> > > > > > > > > subtracted from vruntime.
> > > > > > > > >
> > > > > > > > > Last point, when a nice -20 task runs on a rq, it will take a bit more
> > > > > > > > > than 2 seconds for the vruntime to be increased by more than 24ms (the
> > > > > > > > > maximum credit that a waking task can get), so the threshold must be
> > > > > > > > > significantly higher than 2 sec. On the opposite side, the lowest
> > > > > > > > > possible weight of a cfs rq is 2, which means that the problem appears
> > > > > > > > > for a sleep longer than or equal to 2^54 = 2^63*2/1024. We should use
> > > > > > > > > this value instead of an arbitrary 200 days.
> > > > > > > > Does it mean any threshold between 2 sec and 2^54 nsec should be fine? Because
> > > > > > > > 1. Any task that sleeps longer than 2 sec will get at most 24 ms (sysctl_sched_latency)
> > > > > > > > 'vruntime bonus' when enqueued.
> > > > > >
> > > > > > This means that if a task nice -20 runs on a cfs rq while your task is
> > > > > > sleeping 2 seconds, the min vruntime of the cfs rq will increase by
> > > > > > 24ms. If there are 2 nice -20 tasks then the min vruntime will
> > > > > > increase by 24ms after 4 seconds and so on ...
> > > > > >
> > > > > > On the other side, a task nice 19 that runs 1ms will increase its
> > > > > > vruntime by around 68ms.
> > > > > >
> > > > > > So if there is 1 task nice 19 with 11 tasks nice -20 on the same cfs
> > > > > > rq, the nice 19 one should run 1ms every 65 seconds, and this also
> > > > > > means that the vruntime of the nice 19 task should still be above
> > > > > > min_vruntime after sleeping 60 seconds. Of course this is even worse
> > > > > > with a child cgroup with the lowest weight (weight of 2 instead of 15).
> > > > > >
> > > > > > Just to say that 60 seconds is not so far away and 2^54 should be better IMHO
> > > > >
> > > > > If we go this route, what would be the proper way to infer this value?
> > > > > Looks like
> > > > >
> > > > >     (1ull << 63) / NICE_0_LOAD * scale_load(MIN_SHARES)
> > > >
> > > > (1ull << 63) / NICE_0_LOAD * MIN_SHARES
> > >
> > > On 64bit platforms NICE_0_LOAD == 1L << 20 (i.e. it's also scaled) for
> > > better precision. So this will yield 2^63 / 2^20 * 2 = 2^44. Good
> > > enough probably but confusing.
> >
> > Something like the below should be enough to explain the value
> >
> > /*
> >  * min_vruntime can move forward much faster than real time. The worst case
> >  * happens when an entity with the min weight always runs on the cfs rq. In this
> >  * case, the max comparison between vruntime and min_vruntime can fail after a
> >  * sleep greater than :
> >  * (1ull << 63) / NICE_0_LOAD * MIN_SHARES
>
> Sorry if I'm being dense, but aren't NICE_0_LOAD and MIN_SHARES measured
> in different units: the former is scaled while the latter is not?

There are 2 usages of MIN_SHARES:
- one when setting the cgroup weight in __sched_group_set_shares(), which
  uses scale_load(MIN_SHARES)
- one when sharing this weight between the cfs rqs of the group in
  calc_group_shares(): clamp_t(long, shares, MIN_SHARES, tg_shares)

The 2nd one is the most important in our case; that's why I use MIN_SHARES
and not scale_load(MIN_SHARES).

> >  * We can simplify this to :
> >  * (1ull << 63) / NICE_0_LOAD
> >  */
> > #define SLEEP_VRUNTIME_OVERFLOW ((1ull << 63) / NICE_0_LOAD)
>
> Thanks,
> Roman.
>
> Amazon Development Center Germany GmbH
> Krausenstr. 38
> 10117 Berlin
> Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
> Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
> Sitz: Berlin
> Ust-ID: DE 289 237 879
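
For readers following the thread, a small standalone sketch (userspace C, written
for this archive copy; it is not part of the original exchange and not the kernel
code) of the failure mode under discussion: max_vruntime() compares two u64
vruntimes through a signed 64-bit delta, so once min_vruntime has moved more than
2^63 ahead of a sleeper's stale vruntime, the comparison picks the wrong value.
The SLEEP_VRUNTIME_OVERFLOW constant mirrors the definition quoted above; the
example numbers and the guard at the end are illustrative assumptions only.

/*
 * Standalone illustration (userspace C, not kernel code): shows how the
 * s64 comparison used by max_vruntime() picks the wrong value once
 * min_vruntime has moved more than 2^63 ahead of a sleeper's stale
 * vruntime. NICE_0_LOAD is the 64-bit scaled value (1 << 20) discussed
 * in the thread; the example numbers are made up for the demonstration.
 */
#include <stdint.h>
#include <stdio.h>

#define NICE_0_LOAD             (1ULL << 20)
#define SLEEP_VRUNTIME_OVERFLOW ((1ULL << 63) / NICE_0_LOAD)

/* Same comparison pattern as the kernel's max_vruntime(). */
static uint64_t max_vruntime(uint64_t max_vruntime, uint64_t vruntime)
{
    int64_t delta = (int64_t)(vruntime - max_vruntime);

    if (delta > 0)
        max_vruntime = vruntime;
    return max_vruntime;
}

int main(void)
{
    uint64_t se_vruntime  = 100;                             /* stale vruntime of a long sleeper */
    uint64_t min_vruntime = se_vruntime + (1ULL << 63) + 1;  /* cfs rq kept running meanwhile */

    /* The u64 difference exceeds 2^63, so the s64 delta wraps negative
     * and the stale value wins even though min_vruntime is far ahead. */
    printf("picked %llu, expected %llu\n",
           (unsigned long long)max_vruntime(se_vruntime, min_vruntime),
           (unsigned long long)min_vruntime);

    /* Hypothetical use of the proposed constant: a sleep (measured with the
     * pelt clock, as in Peter's snippet above) longer than this can no
     * longer be compared safely against min_vruntime. */
    uint64_t sleep_time = 1ULL << 45;                        /* ns, example value */
    if (sleep_time > SLEEP_VRUNTIME_OVERFLOW)
        printf("sleep exceeds SLEEP_VRUNTIME_OVERFLOW (%llu ns)\n",
               (unsigned long long)SLEEP_VRUNTIME_OVERFLOW);

    return 0;
}

Running it prints the stale value being picked, which is the wrap-around the
thread wants to guard against.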
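
As a quick numeric cross-check of the figures quoted above (2^54 = 2^63*2/1024
with the unscaled NICE_0_LOAD of 1024, 2^63/2^20*2 = 2^44 with the 64-bit scaled
value, and the simplified 2^63/2^20), the following sketch only restates the
thread's arithmetic and converts the nanosecond thresholds into days; it does not
decide which variant is the right one.

/*
 * Back-of-the-envelope conversion of the candidate overflow thresholds
 * discussed above from nanoseconds to days. The formulas are restated
 * from the thread.
 */
#include <stdint.h>
#include <stdio.h>

static void show(const char *label, uint64_t ns)
{
    printf("%-42s %20llu ns  (~%.1f days)\n",
           label, (unsigned long long)ns, (double)ns / 1e9 / 86400.0);
}

int main(void)
{
    show("2^63 * 2 / 1024 (unscaled NICE_0_LOAD)", (1ULL << 63) / 1024 * 2);         /* 2^54 */
    show("2^63 / 2^20 * 2 (scaled NICE_0_LOAD)",   (1ULL << 63) / (1ULL << 20) * 2); /* 2^44 */
    show("2^63 / 2^20     (simplified)",           (1ULL << 63) / (1ULL << 20));     /* 2^43 */
    return 0;
}

The first value lands near the "200 days" mentioned earlier in the thread; the
scaled variants come out to a few hours.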