Date: Thu, 14 Mar 2019 12:43:11 +0000
From: Patrick Bellasi
To: Suren Baghdasaryan
Cc: Peter Zijlstra, LKML, linux-pm@vger.kernel.org, linux-api@vger.kernel.org,
    Ingo Molnar, Tejun Heo,
    "Rafael J. Wysocki", Vincent Guittot, Viresh Kumar, Paul Turner,
    Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
    Todd Kjos, Joel Fernandes, Steve Muckle
Subject: Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting
Message-ID: <20190314124311.f6azk66rnwk4p6zx@e110439-lin>
References: <20190208100554.32196-1-patrick.bellasi@arm.com>
 <20190208100554.32196-2-patrick.bellasi@arm.com>
 <20190313135238.GC5922@hirez.programming.kicks-ass.net>

On 13-Mar 14:23, Suren Baghdasaryan wrote:
> On Wed, Mar 13, 2019 at 6:52 AM Peter Zijlstra wrote:
> >
> > On Fri, Feb 08, 2019 at 10:05:40AM +0000, Patrick Bellasi wrote:
> > > +/*
> > > + * When a task is enqueued on a rq, the clamp bucket currently defined by the
> > > + * task's uclamp::bucket_id is reference counted on that rq. This also
> > > + * immediately updates the rq's clamp value if required.
> > > + *
> > > + * Since tasks know their specific value requested from user-space, we track
> > > + * within each bucket the maximum value for tasks refcounted in that bucket.
> > > + * This provide a further aggregation (local clamping) which allows to track
> > > + * within each bucket the exact "requested" clamp value whenever all tasks
> > > + * RUNNABLE in that bucket require the same clamp.
> > > + */
> > > +static inline void uclamp_rq_inc_id(struct task_struct *p, struct rq *rq,
> > > +				    unsigned int clamp_id)
> > > +{
> > > +	unsigned int bucket_id = p->uclamp[clamp_id].bucket_id;
> > > +	unsigned int rq_clamp, bkt_clamp, tsk_clamp;
> > > +
> > > +	rq->uclamp[clamp_id].bucket[bucket_id].tasks++;
> > > +
> > > +	/*
> > > +	 * Local clamping: rq's buckets always track the max "requested"
> > > +	 * clamp value from all RUNNABLE tasks in that bucket.
> > > +	 */
> > > +	tsk_clamp = p->uclamp[clamp_id].value;
> > > +	bkt_clamp = rq->uclamp[clamp_id].bucket[bucket_id].value;
> > > +	rq->uclamp[clamp_id].bucket[bucket_id].value = max(bkt_clamp, tsk_clamp);
> >
> > So, if I read this correct:
> >
> >  - here we track a max value in a bucket,
> >
> > > +	rq_clamp = READ_ONCE(rq->uclamp[clamp_id].value);
> > > +	WRITE_ONCE(rq->uclamp[clamp_id].value, max(rq_clamp, tsk_clamp));
> > > +}
> > > +
> > > +/*
> > > + * When a task is dequeued from a rq, the clamp bucket reference counted by
> > > + * the task is released. If this is the last task reference counting the rq's
> > > + * max active clamp value, then the rq's clamp value is updated.
> > > + * Both the tasks reference counter and the rq's cached clamp values are
> > > + * expected to be always valid, if we detect they are not we skip the updates,
> > > + * enforce a consistent state and warn.
> > > + */
> > > +static inline void uclamp_rq_dec_id(struct task_struct *p, struct rq *rq,
> > > +				    unsigned int clamp_id)
> > > +{
> > > +	unsigned int bucket_id = p->uclamp[clamp_id].bucket_id;
> > > +	unsigned int rq_clamp, bkt_clamp;
> > > +
> > > +	SCHED_WARN_ON(!rq->uclamp[clamp_id].bucket[bucket_id].tasks);
> > > +	if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks))
> > > +		rq->uclamp[clamp_id].bucket[bucket_id].tasks--;
> > > +
> > > +	/*
> > > +	 * Keep "local clamping" simple and accept to (possibly) overboost
> > > +	 * still RUNNABLE tasks in the same bucket.
> > > +	 */
> > > +	if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks))
> > > +		return;
> >
> > (Oh man, I hope that generates semi sane code; long live CSE passes I
> > suppose)
> >
> > But we never decrement that bkt_clamp value on dequeue.
> >
> > > +	bkt_clamp = rq->uclamp[clamp_id].bucket[bucket_id].value;
> > > +
> > > +	/* The rq's clamp value is expected to always track the max */
> > > +	rq_clamp = READ_ONCE(rq->uclamp[clamp_id].value);
> > > +	SCHED_WARN_ON(bkt_clamp > rq_clamp);
> > > +	if (bkt_clamp >= rq_clamp) {
> >
> > head hurts, this reads ==, how can this ever not be so?
> >
> > > +		/*
> > > +		 * Reset rq's clamp bucket value to its nominal value whenever
> > > +		 * there are anymore RUNNABLE tasks refcounting it.
> >
> > -ENOPARSE
> >
> > > +		 */
> > > +		rq->uclamp[clamp_id].bucket[bucket_id].value =
> > > +			uclamp_bucket_value(rq_clamp);
> >
> > But basically you decrement the bucket value to the nominal value.
> >
> > > +		uclamp_rq_update(rq, clamp_id);
> > > +	}
> > > +}
> >
> > Given all that, what is to stop the bucket value to climbing to
> > uclamp_bucket_value(+1)-1 and staying there (provided there's someone
> > runnable)?
> >
> > Why are we doing this... ?
>
> I agree with Peter, this part of the patch was the hardest to read.
> SCHED_WARN_ON line makes sense to me. The condition that follows and
> the following comment are a little baffling. Condition seems to
> indicate that the code that follows should be executed only if we are
> in the top-most occupied bucket (the bucket which has tasks and has
> the highest uclamp value).
>
> So this bucket just lost its last task and we should update
> rq->uclamp[clamp_id].value.

Right.

> However that's not exactly what the code does... It also resets
> rq->uclamp[clamp_id].bucket[bucket_id].value.

Right...

> So if I understand correctly, unless the bucket that just lost its
> last task is the top-most one its value will not be reset to nominal
> value. That looks like a bug to me. Am I missing something?

... and I think you've got a point here!

The reset to nominal value line should be done unconditionally.
I'll move it outside its current block. Thanks for spotting it.

> Side note: some more explanation would be very helpful.

Will move that "bucket local max" management code into a separate patch,
as suggested by Peter. Hopefully that should make the logic clearer and
allow me to add some notes in the changelog.

--
#include <best/regards.h>

Patrick Bellasi
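
[Editorial note, not part of the original thread: the exchange above turns
on how a clamp value maps to a bucket and what a bucket's "nominal" value
is. The standalone C sketch below illustrates that mapping, assuming the
linear value-to-bucket scheme of the posted series; the bucket count of 5
and the plain integer division are illustrative assumptions, not the exact
kernel code, which derives the bucket count from a config option.]

#include <stdio.h>

/* Illustrative constants; assumptions for a self-contained example. */
#define SCHED_CAPACITY_SCALE	1024
#define UCLAMP_BUCKETS		5
#define UCLAMP_BUCKET_DELTA	(SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS)

/* Map a requested clamp value to the bucket that tracks it. */
static unsigned int uclamp_bucket_id(unsigned int clamp_value)
{
	return clamp_value / UCLAMP_BUCKET_DELTA;
}

/* A bucket's "nominal" value: the bottom of the range it covers. */
static unsigned int uclamp_bucket_value(unsigned int clamp_value)
{
	return UCLAMP_BUCKET_DELTA * uclamp_bucket_id(clamp_value);
}

int main(void)
{
	/*
	 * A task requesting 300 lands in bucket 1 (range 204..407), whose
	 * nominal value is 204. Because a bucket tracks the max requested
	 * value among its RUNNABLE tasks, its value can sit anywhere up to
	 * nominal + UCLAMP_BUCKET_DELTA - 1 while such tasks remain: that
	 * is the "climbing" behaviour Peter asks about above.
	 */
	unsigned int v = 300;

	printf("value=%u bucket_id=%u nominal=%u top=%u\n",
	       v, uclamp_bucket_id(v), uclamp_bucket_value(v),
	       uclamp_bucket_value(v) + UCLAMP_BUCKET_DELTA - 1);
	return 0;
}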
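
[Editorial note, not part of the original thread: a minimal sketch of what
Patrick's proposed change could look like, with the reset to the nominal
value applied unconditionally once a bucket empties. Helper and field names
(uclamp_bucket_value, uclamp_rq_update, SCHED_WARN_ON, rq->uclamp[])
are taken from the quoted patch; the actual code in later revisions of the
series may differ.]

static inline void uclamp_rq_dec_id(struct task_struct *p, struct rq *rq,
				    unsigned int clamp_id)
{
	unsigned int bucket_id = p->uclamp[clamp_id].bucket_id;
	unsigned int rq_clamp, bkt_clamp;

	SCHED_WARN_ON(!rq->uclamp[clamp_id].bucket[bucket_id].tasks);
	if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks))
		rq->uclamp[clamp_id].bucket[bucket_id].tasks--;

	/*
	 * While other RUNNABLE tasks still refcount this bucket, keep its
	 * (possibly overboosting) local max in place.
	 */
	if (likely(rq->uclamp[clamp_id].bucket[bucket_id].tasks))
		return;

	/*
	 * The bucket is now empty: unconditionally reset it to its nominal
	 * value, so that a non-top-most bucket cannot keep a stale local
	 * max (the case Suren spotted above).
	 */
	bkt_clamp = rq->uclamp[clamp_id].bucket[bucket_id].value;
	rq->uclamp[clamp_id].bucket[bucket_id].value =
		uclamp_bucket_value(bkt_clamp);

	/*
	 * Only if this bucket was defining the rq's clamp value does the
	 * rq-wide max need to be recomputed.
	 */
	rq_clamp = READ_ONCE(rq->uclamp[clamp_id].value);
	if (bkt_clamp >= rq_clamp)
		uclamp_rq_update(rq, clamp_id);
}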