Date: Wed, 16 Mar 2016 18:52:11 +0100
From: Peter Zijlstra
To: "Rafael J. Wysocki"
Cc: Linux PM list, Juri Lelli, Steve Muckle, ACPI Devel Maling List,
    Linux Kernel Mailing List, Srinivas Pandruvada, Viresh Kumar,
    Vincent Guittot, Michael Turquette, Ingo Molnar
Subject: Re: [PATCH v4 7/7] cpufreq: schedutil: New governor based on scheduler utilization data
Message-ID: <20160316175211.GF6344@twins.programming.kicks-ass.net>
References: <1711281.bPmSjlBT7c@vostro.rjw.lan> <11678919.CQLTrQTYxG@vostro.rjw.lan>
In-Reply-To: <11678919.CQLTrQTYxG@vostro.rjw.lan>

On Wed, Mar 16, 2016 at 03:59:18PM +0100, Rafael J. Wysocki wrote:
> +static void sugov_work(struct work_struct *work)
> +{
> +	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> +
> +	mutex_lock(&sg_policy->work_lock);
> +	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> +				CPUFREQ_RELATION_L);
> +	mutex_unlock(&sg_policy->work_lock);
> +

Be aware that the store below can creep up and become visible before the
unlock. AFAICT that doesn't really matter, but still.

> +	sg_policy->work_in_progress = false;
> +}
> +
> +static void sugov_irq_work(struct irq_work *irq_work)
> +{
> +	struct sugov_policy *sg_policy;
> +
> +	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
> +	schedule_work(&sg_policy->work);
> +}

If you care what cpu the work runs on, you should use schedule_work_on();
regular schedule_work() can end up on any random cpu (although typically
it does not).

In particular: schedule_work() -> queue_work() ->
queue_work_on(.cpu = WORK_CPU_UNBOUND) -> __queue_work(), which does:

	if (req_cpu == UNBOUND)
		cpu = wq_select_unbound_cpu();

and wq_select_unbound_cpu() has a Round-Robin 'feature' to detect just such
dependencies.
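
For illustration only, a minimal sketch of the schedule_work_on() variant,
assuming the frequency update is meant to run on the CPU that raised the
irq_work (an assumption; the quoted code does not establish that):

	static void sugov_irq_work(struct irq_work *irq_work)
	{
		struct sugov_policy *sg_policy;

		sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
		/*
		 * Queue the work on the CPU running this irq_work handler
		 * instead of letting the workqueue pick one; plain
		 * schedule_work() gives no such guarantee, as noted above.
		 */
		schedule_work_on(smp_processor_id(), &sg_policy->work);
	}

smp_processor_id() is stable here because irq_work handlers run in hard
interrupt context; whether the update actually needs to stay on that cpu
is a separate policy question for the governor.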