Date: Mon, 15 Jul 2019 18:42:00 +0200
From: Michal Koutný
To: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar,
	Peter Zijlstra, Tejun Heo, Rafael J. Wysocki, Vincent Guittot,
	Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
	Steve Muckle, Suren Baghdasaryan, Alessio Balsini
Subject: Re: [PATCH v11 2/5] sched/core: uclamp: Propagate parent clamps
Message-ID: <20190715164200.GA30862@blackbody.suse.cz>
References: <20190708084357.12944-1-patrick.bellasi@arm.com>
	<20190708084357.12944-3-patrick.bellasi@arm.com>
In-Reply-To: <20190708084357.12944-3-patrick.bellasi@arm.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon, Jul 08, 2019 at 09:43:54AM +0100, Patrick Bellasi wrote:
> Since it's possible for a cpu.uclamp.min value to be bigger than the
> cpu.uclamp.max value, ensure local consistency by restricting each
> "protection" (i.e. min utilization) with the corresponding "limit"
> (i.e. max utilization).
I think this constraint should be mentioned in the Documentation/....

> +static void cpu_util_update_eff(struct cgroup_subsys_state *css)
> +{
> +	struct cgroup_subsys_state *top_css = css;
> +	struct uclamp_se *uc_se = NULL;
> +	unsigned int eff[UCLAMP_CNT];
> +	unsigned int clamp_id;
> +	unsigned int clamps;
> +
> +	css_for_each_descendant_pre(css, top_css) {
> +		uc_se = css_tg(css)->parent
> +			? css_tg(css)->parent->uclamp : NULL;
> +
> +		for_each_clamp_id(clamp_id) {
> +			/* Assume effective clamps matches requested clamps */
> +			eff[clamp_id] = css_tg(css)->uclamp_req[clamp_id].value;
> +			/* Cap effective clamps with parent's effective clamps */
> +			if (uc_se &&
> +			    eff[clamp_id] > uc_se[clamp_id].value) {
> +				eff[clamp_id] = uc_se[clamp_id].value;
> +			}
> +		}
> +		/* Ensure protection is always capped by limit */
> +		eff[UCLAMP_MIN] = min(eff[UCLAMP_MIN], eff[UCLAMP_MAX]);
(Nitpick only: reassigning uc_se to the child where it previously held the
parent decreases readability, IMO.)

> +		/* Propagate most restrictive effective clamps */
> +		clamps = 0x0;
> +		uc_se = css_tg(css)->uclamp;
> +		for_each_clamp_id(clamp_id) {
> +			if (eff[clamp_id] == uc_se[clamp_id].value)
> +				continue;
> +			uc_se[clamp_id].value = eff[clamp_id];
> +			uc_se[clamp_id].bucket_id = uclamp_bucket_id(eff[clamp_id]);
Shouldn't these writes be synchronized with writes from
__setscheduler_uclamp()?