Date: Thu, 1 Jul 2021 12:08:03 +0100
From: Qais Yousef
To: Quentin Perret
Cc: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rickyiu@google.com, wvw@google.com,
	patrick.bellasi@matbug.net, xuewen.yan94@gmail.com,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v3 1/3] sched: Fix UCLAMP_FLAG_IDLE setting
Message-ID: <20210701110803.2lka3eaoukbb6b4p@e107158-lin.cambridge.arm.com>
References: <20210623123441.592348-1-qperret@google.com>
	<20210623123441.592348-2-qperret@google.com>
	<20210630145848.htb7pnwsl2gao77x@e107158-lin.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To:

On 07/01/21 10:07, Quentin Perret wrote:
> On Wednesday 30 Jun 2021 at 15:45:14 (+0000), Quentin Perret wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index b094da4c5fea..c0b999a8062a 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -980,7 +980,6 @@ static inline void uclamp_idle_reset(struct rq *rq, enum uclamp_id clamp_id,
> >  	if (!(rq->uclamp_flags & UCLAMP_FLAG_IDLE))
> >  		return;
> > 
> > -	rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
> >  	WRITE_ONCE(rq->uclamp[clamp_id].value, clamp_value);
> >  }
> > 
> > @@ -1253,6 +1252,10 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
> > 
> >  	for_each_clamp_id(clamp_id)
> >  		uclamp_rq_inc_id(rq, p, clamp_id);
> > +
> > +	/* Reset clamp idle holding when there is one RUNNABLE task */
> > +	if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
> > +		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
> >  }
> > 
> >  static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
> > @@ -1300,6 +1303,13 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
> >  	if (p->uclamp[clamp_id].active) {
> >  		uclamp_rq_dec_id(rq, p, clamp_id);
> >  		uclamp_rq_inc_id(rq, p, clamp_id);
> > +
> > +		/*
> > +		 * Make sure to clear the idle flag if we've transiently reached
> > +		 * 0 uclamp active tasks on the rq.
> > +		 */
> > +		if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
> > +			rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
> 
> Bah, now that I had coffee I realize this has the exact same problem.
> Let me look at this again ...

Hehe uclamp has this effect. It's all obvious, until it's not :-)

Yes this needs to be out of the loop.

Thanks for looking at this!

Cheers

--
Qais Yousef
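
For reference, a rough sketch of what "out of the loop" could look like. This
is only an illustration, not the actual follow-up patch: it assumes
uclamp_update_active() is reworked to take only the task and iterate the clamp
ids itself (mirroring the uclamp_rq_inc() hunk quoted above), so that
UCLAMP_FLAG_IDLE is cleared once, after every clamp id has been re-incremented.
The task_rq_lock()/task_rq_unlock() pair around the update is likewise an
assumption here.

static inline void uclamp_update_active(struct task_struct *p)
{
	enum uclamp_id clamp_id;
	struct rq_flags rf;
	struct rq *rq;

	/*
	 * Sketch only: pin the task's rq so the dec/inc pairs and the flag
	 * update below are consistent w.r.t. that rq.
	 */
	rq = task_rq_lock(p, &rf);

	for_each_clamp_id(clamp_id) {
		if (p->uclamp[clamp_id].active) {
			uclamp_rq_dec_id(rq, p, clamp_id);
			uclamp_rq_inc_id(rq, p, clamp_id);
		}
	}

	/*
	 * Clear the idle holding flag once, outside the clamp_id loop, so
	 * every clamp id above still sees the flag set and gets its rq value
	 * reset by uclamp_idle_reset().
	 */
	if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;

	task_rq_unlock(rq, p, &rf);
}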