Message-ID: <2860f381-24e8-2950-388a-b984c4eb51f2@arm.com>
Date: Wed, 8 Jun 2022 11:53:42 +0200
From: Dietmar Eggemann
Subject: Re: [PATCH v10 2/7] sched/fair: Decay task PELT values
during wakeup migration
To: Vincent Donnefort, peterz@infradead.org, mingo@redhat.com, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, morten.rasmussen@arm.com, chris.redpath@arm.com, qperret@google.com, tao.zhou@linux.dev, kernel-team@android.com
References: <20220607123254.565579-1-vdonnefort@google.com> <20220607123254.565579-3-vdonnefort@google.com>
In-Reply-To: <20220607123254.565579-3-vdonnefort@google.com>

On 07/06/2022 14:32, Vincent Donnefort wrote:
> From: Vincent Donnefort

[...]

> To that end, we need sched_clock_cpu() but it is a costly function. Limit
> the usage to the case where the source CPU is idle as we know this is when
> the clock is having the biggest risk of being outdated. In this such case,

s/In this such case/in such a case

> let's call it cfs_idle_lag the delta time between the rq_clock_pelt value
> at rq idle and cfs_rq idle.

s/it cfs_idle_lag the delta time between the rq_clock_pelt value at rq
idle and cfs_rq idle./the delta time between the rq_clock_pelt value at
rq idle and cfs_rq idle cfs_idle_lag?

> And rq_idle_lag the delta between "now" and
> the rq_clock_pelt at rq idle.
> --->
> The estimated PELT clock is then:
>
>   last_update_time (the cfs_rq's last_update_time)
>     + cfs_idle_lag (delta between cfs_rq's update and rq's update)
>     + rq_idle_lag  (delta between rq's update and now)
>
>   last_update_time = cfs_rq_clock_pelt()
>                    = rq_clock_pelt() - cfs->throttled_clock_pelt_time
>
>   cfs_idle_lag = rq_clock_pelt()@rq_idle -
>                  rq_clock_pelt()@cfs_rq_idle
>
>   rq_idle_lag = sched_clock_cpu() - rq_clock()@rq_idle
>
> The rq_clock_pelt() from last_update_time being the same as
> rq_clock_pelt()@cfs_rq_idle, we can write:
>
>   estimation = rq_clock_pelt()@rq_idle - cfs->throttled_clock_pelt_time +
>                sched_clock_cpu() - rq_clock()@rq_idle
>
> The clocks being not accessible without the rq lock taken, some timestamps
> are created:
>
>   rq_clock_pelt()@rq_idle        is rq->clock_pelt_idle
>   rq_clock()@rq_idle             is rq->enter_idle
>   cfs->throttled_clock_pelt_time is cfs_rq->throttled_pelt_idle
> <---

^^^ This whole block seems to be the same information as the comment
block in migrate_se_pelt_lag(). But you haven't updated it in v10. Maybe
you can get rid of this here and point to the comment block in
migrate_se_pelt_lag() from here instead to guarantee consistency?
Otherwise they should be in sync.

[...]

> +	/*
> +	 * Estimated "now" is: last_update_time + cfs_idle_lag + rq_idle_lag, where:
> +	 *
> +	 *   last_update_time (the cfs_rq's last_update_time)
> +	 *     = cfs_rq_clock_pelt()@cfs_rq_idle
> +	 *     = rq_clock_pelt()@cfs_rq_idle
> +	 *       - cfs->throttled_clock_pelt_time@cfs_rq_idle
> +	 *
> +	 *   cfs_idle_lag (delta between cfs_rq's update and rq's update)

Isn't this: (delta between rq's update and cfs_rq's update) ?

> +	 *     = rq_clock_pelt()@rq_idle - rq_clock_pelt()@cfs_rq_idle
> +	 *
> +	 *   rq_idle_lag (delta between rq's update and now)

Isn't this: (delta between now and rq's update) ?
> +	 *     = sched_clock_cpu() - rq_clock()@rq_idle
> +	 *
> +	 * We can then write:
> +	 *
> +	 *   now = rq_clock_pelt()@rq_idle - cfs->throttled_clock_pelt_time +
> +	 *         sched_clock_cpu() - rq_clock()@rq_idle
> +	 * Where:
> +	 *      rq_clock_pelt()@rq_idle is rq->clock_pelt_idle
> +	 *      rq_clock()@rq_idle      is rq->clock_idle

  is rq->clock_pelt_idle
  is rq->clock_idle

> +	 *      cfs->throttled_clock_pelt_time@cfs_rq_idle is
> +	 *      cfs_rq->throttled_pelt_idle

  is cfs_rq->throttled_pelt_idle

Maybe align the `is foo` for readability?

[...]