From: "Rafael J. Wysocki"
To: Peter Zijlstra
Cc: Linux PM list, Juri Lelli, Steve Muckle, ACPI Devel Mailing List, Linux Kernel Mailing List, Srinivas Pandruvada, Viresh Kumar, Vincent Guittot, Michael Turquette, Ingo Molnar
Subject: Re: [PATCH v4 7/7] cpufreq: schedutil: New governor based on scheduler utilization data
Date: Wed, 16 Mar 2016 22:38:14 +0100

On Wednesday, March 16, 2016 06:52:11 PM Peter Zijlstra wrote:
> On Wed, Mar 16, 2016 at 03:59:18PM +0100, Rafael J. Wysocki wrote:
> > +static void sugov_work(struct work_struct *work)
> > +{
> > +	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > +
> > +	mutex_lock(&sg_policy->work_lock);
> > +	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> > +				CPUFREQ_RELATION_L);
> > +	mutex_unlock(&sg_policy->work_lock);
> > +
>
> Be aware that the below store can creep up and become visible before the
> unlock. AFAICT that doesn't really matter, but still.

It doesn't matter. :-)  Had it mattered, I would have used memory barriers.
> > +	sg_policy->work_in_progress = false;
> > +}
> > +
> > +static void sugov_irq_work(struct irq_work *irq_work)
> > +{
> > +	struct sugov_policy *sg_policy;
> > +
> > +	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
> > +	schedule_work(&sg_policy->work);
> > +}
>
> If you care what cpu the work runs on, you should schedule_work_on(),
> regular schedule_work() can end up on any random cpu (although typically
> it does not).

I know, but I don't care too much.

"ondemand" and "conservative" use schedule_work() for the same thing, so
drivers need to cope with that if they need things to run on a particular
CPU.

That said, I guess things would be a bit more efficient if the work was
scheduled on the same CPU that had queued up the irq_work.  It also
wouldn't be too difficult to implement, so I'll make that change.
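The change Rafael agrees to make could look roughly like the sketch below (a hypothetical variant of the handler, not the actual follow-up patch). Since irq_work handlers run in hardirq context on the CPU that queued the irq_work, smp_processor_id() identifies that CPU, and schedule_work_on() pins the work there:

```c
/* Hypothetical sketch: queue the work on the CPU that ran the
 * irq_work handler, i.e. the CPU that queued the irq_work.
 */
static void sugov_irq_work(struct irq_work *irq_work)
{
	struct sugov_policy *sg_policy;

	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
	/* irq_work handlers run with interrupts disabled on the
	 * queuing CPU, so smp_processor_id() is stable here.
	 */
	schedule_work_on(smp_processor_id(), &sg_policy->work);
}
```

This is a kernel fragment and not runnable standalone; it only illustrates swapping schedule_work() for schedule_work_on() as discussed above.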