Date: Fri, 17 Aug 2018 12:04:21 +0100
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Paul Turner, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: Re: [PATCH v3 03/14] sched/core: uclamp: add CPU's clamp groups accounting
Message-ID: <20180817110421.GJ2960@e110439-lin>
References: <20180806163946.28380-1-patrick.bellasi@arm.com> <20180806163946.28380-4-patrick.bellasi@arm.com>
In-Reply-To: <20180806163946.28380-4-patrick.bellasi@arm.com>

On 06-Aug 17:39, Patrick Bellasi wrote:

[...]

> +/**
> + * uclamp_cpu_get_id(): increase reference count for a clamp group on a CPU
> + * @p: the task being enqueued on a CPU
> + * @rq: the CPU's rq where the clamp group has to be reference counted
> + * @clamp_id: the utilization clamp (e.g. min or max utilization) to reference
> + *
> + * Once a task is enqueued on a CPU's RQ, the clamp group currently defined by
> + * the task's uclamp.group_id is reference counted on that CPU.
> + */
> +static inline void uclamp_cpu_get_id(struct task_struct *p,
> +				     struct rq *rq, int clamp_id)
> +{
> +	struct uclamp_group *uc_grp;
> +	struct uclamp_cpu *uc_cpu;
> +	int clamp_value;
> +	int group_id;
> +
> +	/* No task specific clamp values: nothing to do */
> +	group_id = p->uclamp[clamp_id].group_id;
> +	if (group_id == UCLAMP_NOT_VALID)
> +		return;

This is broken for util_max aggregation. By not refcounting tasks
without a task-specific clamp value, we end up enforcing a util_max
clamp on these tasks whenever they are co-scheduled with another
max-clamped task.

I need to fix this by removing this "optimization" (which works only
for util_min) and refcounting all the tasks.
> +
> +	/* Reference count the task into its current group_id */
> +	uc_grp = &rq->uclamp.group[clamp_id][0];
> +	uc_grp[group_id].tasks += 1;
> +
> +	/*
> +	 * If this is the new max utilization clamp value, then we can update
> +	 * straight away the CPU clamp value. Otherwise, the current CPU clamp
> +	 * value is still valid and we are done.
> +	 */
> +	uc_cpu = &rq->uclamp;
> +	clamp_value = p->uclamp[clamp_id].value;
> +	if (uc_cpu->value[clamp_id] < clamp_value)
> +		uc_cpu->value[clamp_id] = clamp_value;
> +}
> +

--
#include

Patrick Bellasi