Message-ID: <1458622305.29862.9.camel@gmail.com>
Subject: Re: [PATCH] cpufreq: governor: Always schedule work on the CPU running update
From: Mike Galbraith
To: Viresh Kumar, "Rafael J. Wysocki"
Cc: Linux PM list, Linux Kernel Mailing List
Date: Tue, 22 Mar 2016 05:51:45 +0100
In-Reply-To: <20160322025112.GQ27778@vireshk-i7>
References: <3454445.v5QdUCRORG@vostro.rjw.lan> <20160322025112.GQ27778@vireshk-i7>

On Tue, 2016-03-22 at 08:21 +0530, Viresh Kumar wrote:
> On 22-03-16, 01:17, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki
> > 
> > Modify dbs_irq_work() to always schedule the process-context work
> > on the current CPU, which is also the CPU that ran the
> > dbs_update_util_handler() the irq_work being handled came from.
> > 
> > This causes the entire frequency update handling (involving the
> > "ondemand" or "conservative" governors) to be carried out by the
> > CPU whose frequency is to be updated, and reduces the overall
> > amount of inter-CPU noise related to cpufreq.
> > 
> > Signed-off-by: Rafael J. Wysocki
> > ---
> >  drivers/cpufreq/cpufreq_governor.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
> > ===================================================================
> > --- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
> > +++ linux-pm/drivers/cpufreq/cpufreq_governor.c
> > @@ -245,7 +245,7 @@ static void dbs_irq_work(struct irq_work
> >  	struct policy_dbs_info *policy_dbs;
> >  
> >  	policy_dbs = container_of(irq_work, struct policy_dbs_info, irq_work);
> > -	schedule_work(&policy_dbs->work);
> > +	schedule_work_on(smp_processor_id(), &policy_dbs->work);
> >  }
> >  
> >  static void dbs_update_util_handler(struct update_util_data *data, u64 time,
> 
> queue_work() used to queue the work on the local cpu by default, has
> that changed now?

By default it still will, but the user now has the option to deflect
work items queued with an unspecified target.  Such items will land on
a CPU included in wq_unbound_cpumask iff the submitting CPU is excluded
from that mask.

	-Mike
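[Editor's note: for illustration, a minimal module sketch contrasting the
two queueing paths discussed above.  This is not part of the patch; the
names demo_fn/demo_work are hypothetical.  schedule_work() queues with an
unspecified target CPU, which the workqueue core may deflect according to
wq_unbound_cpumask, while schedule_work_on() pins the work to an explicit
CPU, as the patch does from dbs_irq_work().]

/*
 * Illustrative sketch only -- a hypothetical minimal module, not part
 * of the patch above.  It contrasts the two queueing paths under
 * discussion.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static void demo_fn(struct work_struct *work)
{
	pr_info("demo work ran on CPU %d\n", smp_processor_id());
}

static DECLARE_WORK(demo_work, demo_fn);

static int __init demo_init(void)
{
	int cpu;

	/*
	 * Unspecified target (WORK_CPU_UNBOUND): normally runs on the
	 * submitting CPU, but may be deflected to a CPU included in
	 * wq_unbound_cpumask if the submitting CPU is excluded from it.
	 */
	schedule_work(&demo_work);
	flush_work(&demo_work);

	/*
	 * Explicit target: runs on the CPU that queued it, which is
	 * what the patch does from dbs_irq_work().  get_cpu() disables
	 * preemption so the CPU id stays valid across the call; the
	 * patch itself runs in hard irq context, where preemption is
	 * already off and smp_processor_id() is safe.
	 */
	cpu = get_cpu();
	schedule_work_on(cpu, &demo_work);
	put_cpu();
	flush_work(&demo_work);

	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");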