Message-ID: <65f3016c-c060-4d74-ad0f-d1981d1c6eeb@arm.com>
Date: Tue, 12 Mar 2024 14:18:11 +0100
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sched/fair: simplify __calc_delta()
To: Dawei Li
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Daniel Bristot de Oliveira, Valentin Schneider,
 linux-kernel@vger.kernel.org
References:
 <20240306222838.15087-1-daweilics@gmail.com>
From: Pierre Gondois
In-Reply-To: <20240306222838.15087-1-daweilics@gmail.com>

Hello Dawei,

On 3/6/24 23:28, Dawei Li wrote:
> Based on how __calc_delta() is called now, the input parameter, weight,
> is always NICE_0_LOAD. I think we don't need it as an input parameter
> now?

Maybe 5e963f2bd4654a202a8a05aa3a86cb0300b10e6c ("sched/fair: Commit to EEVDF")
should be referenced, to explain that the case where (weight =< lw.weight)
doesn't exist anymore and that NICE_0_LOAD could be incorporated into
__calc_delta() directly.

Also, I think indirect forms are preferred in general:
  "I think we don't need it as an input parameter now?"
->
  "The 'weight' parameter doesn't seem to be required anymore"
(same note for the whole commit message)

>
> Also, when weight is always NICE_0_LOAD, the initial fact value is
> always 2^10, and the first fact_hi will always be 0. Thus, we can get
> rid of the first if block.
>
> The previous comment "(delta_exec * (weight * lw->inv_weight)) >>
> WMULT_SHIFT" seems to be assuming that lw->weight * lw->inv_weight is
> always (approximately) equal to 2^WMULT_SHIFT. However, when
> CONFIG_64BIT is set, lw->weight * lw->inv_weight is (approximately)
> equal to 2^WMULT_SHIFT * 2^10. What remains true for both CONFIG_32BIT
> and CONFIG_64BIT is: scale_load_down(lw->weight) * lw->inv_weight is
> (approximately) equal to 2^WMULT_SHIFT. (Correct me if I am wrong.)

I think the comment is more about explaining that:
  X / lw.weight
equals:
  X * lw->inv_weight >> WMULT_SHIFT

Also, if CONFIG_64BIT is set, we should have:
  weight / lw.weight == scale_load_down(lw->weight) * 2**10 * lw->inv_weight

So IIUC, either both lines should be updated, or neither (meaning that:
  delta_exec * NICE_0_LOAD / lw->weight
should be changed to:
  delta_exec * scale_load_down(NICE_0_LOAD) / lw->weight
).
I assume it's better to leave the comment as is.

>
> Also updated the comment for calc_delta_fair() to make it more
> accurate.
>
> Signed-off-by: Dawei Li
> ---
>  kernel/sched/fair.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6a16129f9a5c..c5cdb15f7d62 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -252,32 +252,23 @@ static void __update_inv_weight(struct load_weight *lw)
>  }
>  
>  /*
> - * delta_exec * weight / lw.weight
> + * delta_exec * NICE_0_LOAD / lw->weight
>   * OR
> - * (delta_exec * (weight * lw->inv_weight)) >> WMULT_SHIFT
> + * (delta_exec * scale_load_down(NICE_0_LOAD) * lw->inv_weight) >> WMULT_SHIFT
>   *
> - * Either weight := NICE_0_LOAD and lw \e sched_prio_to_wmult[], in which case
> - * we're guaranteed shift stays positive because inv_weight is guaranteed to
> - * fit 32 bits, and NICE_0_LOAD gives another 10 bits; therefore shift >= 22.
> - *
> - * Or, weight =< lw.weight (because lw.weight is the runqueue weight), thus
> - * weight/lw.weight <= 1, and therefore our shift will also be positive.
> + * We're guaranteed shift stays positive because inv_weight is guaranteed to
> + * fit 32 bits, and scale_load_down(NICE_0_LOAD) gives another 10 bits;
> + * therefore shift >= 22.
>   */
> -static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> +static u64 __calc_delta(u64 delta_exec, struct load_weight *lw)
>  {
> -	u64 fact = scale_load_down(weight);
> -	u32 fact_hi = (u32)(fact >> 32);
> +	u64 fact = scale_load_down(NICE_0_LOAD);
> +	int fact_hi;
>  	int shift = WMULT_SHIFT;
>  	int fs;

NIT: maybe re-order the variables to keep a reverse tree ordering.

Otherwise, the patch looks good to me,

Regards,
Pierre
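
P.S.: since the hunk is trimmed above, here is, for reference, a rough sketch of
what I understand the simplified function would look like after the patch (based
only on the commit message, i.e. the 'weight' parameter and the first fact_hi
check are gone, and with the declarations re-ordered as per the NIT). This is an
illustration, not the actual patch content:

	static u64 __calc_delta(u64 delta_exec, struct load_weight *lw)
	{
		/*
		 * scale_load_down(NICE_0_LOAD) is 2^10 on both 32-bit and
		 * 64-bit, so the upper 32 bits of 'fact' start out as zero and
		 * the old "if (unlikely(fact_hi))" block before the multiply
		 * is no longer needed.
		 */
		u64 fact = scale_load_down(NICE_0_LOAD);
		int shift = WMULT_SHIFT;
		int fact_hi;
		int fs;

		__update_inv_weight(lw);

		/* The product is at most 2^10 * (2^32 - 1): it fits 42 bits. */
		fact = mul_u32_u32(fact, lw->inv_weight);

		fact_hi = (u32)(fact >> 32);
		if (fact_hi) {
			fs = fls(fact_hi);
			shift -= fs;
			fact >>= fs;
		}

		return mul_u64_u32_shr(delta_exec, fact, shift);
	}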
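
P.P.S.: to illustrate the fixed-point relation discussed above (lw->inv_weight
being roughly 2^WMULT_SHIFT / scale_load_down(lw->weight), so that a multiply
plus a shift replaces the division), here is a small standalone userspace
program. The weight value and the naive 64-bit arithmetic are only for the
example; the kernel uses mul_u64_u32_shr() and the shift adjustment to avoid
overflow:

	#include <stdio.h>
	#include <stdint.h>

	#define WMULT_CONST	0xffffffffU
	#define WMULT_SHIFT	32
	/* scale_load_down(NICE_0_LOAD), i.e. the weight of a nice-0 task. */
	#define NICE_0_WEIGHT	1024ULL

	int main(void)
	{
		uint64_t lw_weight = 335;	/* e.g. the nice 5 weight */
		uint32_t inv_weight = (uint32_t)(WMULT_CONST / lw_weight);
		uint64_t delta_exec = 3000000;	/* 3 ms of runtime, in ns */

		/* Exact division ... */
		uint64_t by_div = delta_exec * NICE_0_WEIGHT / lw_weight;
		/* ... vs the multiply-and-shift form used by __calc_delta(). */
		uint64_t by_mul = (delta_exec * NICE_0_WEIGHT * inv_weight)
				  >> WMULT_SHIFT;

		printf("by division: %llu, by mul+shift: %llu\n",
		       (unsigned long long)by_div, (unsigned long long)by_mul);
		return 0;
	}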