Date: Wed, 16 Nov 2016 16:26:05 +0100
From: Peter Zijlstra
To: Viresh Kumar
Cc: Rafael Wysocki, Ingo Molnar, linaro-kernel@lists.linaro.org,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Vincent Guittot, Juri Lelli, Robin Randhawa, Steve Muckle
Subject: Re: [PATCH V2 3/4] cpufreq: schedutil: move slow path from workqueue to SCHED_FIFO task
Message-ID: <20161116152605.GU3142@twins.programming.kicks-ass.net>
In-Reply-To: <09f8fe694b4491bfd20272e8c7dc0f13f35eb34e.1479197311.git.viresh.kumar@linaro.org>

On Tue, Nov 15, 2016 at 01:53:22PM +0530, Viresh Kumar wrote:
> @@ -308,7 +313,21 @@ static void sugov_irq_work(struct irq_work *irq_work)
>  	struct sugov_policy *sg_policy;
>
>  	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
> +
> +	/*
> +	 * For real-time and deadline tasks, the schedutil governor shoots the
> +	 * frequency to maximum. Special care must be taken to ensure that this
> +	 * kthread doesn't do the same.
> +	 *
> +	 * This is (mostly) guaranteed by the work_in_progress flag. The flag is
> +	 * updated only at the end of sugov_work(), and before that schedutil
> +	 * rejects all other frequency scaling requests.
> +	 *
> +	 * There is a very rare case, though, where the RT thread yields right
> +	 * after the work_in_progress flag is cleared. The effects of that are
> +	 * neglected for now.
> +	 */
> +	kthread_queue_work(&sg_policy->worker, &sg_policy->work);
>  }

Right, so that's a wee bit icky, but it's also entirely pre-existing code.

Acked-by: Peter Zijlstra (Intel)
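
The comment in the quoted hunk refers to sugov_work() clearing work_in_progress once the slow-path frequency change has completed. For readers who don't have that function in front of them, here is a minimal sketch of the counterpart path, based on the schedutil slow path of that era; the exact locking, field names (work_lock, policy, next_freq) and surrounding code are assumptions, not a verbatim copy of the patch series:

static void sugov_work(struct kthread_work *work)
{
	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);

	/* Perform the actual frequency change outside the fast path. */
	mutex_lock(&sg_policy->work_lock);
	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
				CPUFREQ_RELATION_L);
	mutex_unlock(&sg_policy->work_lock);

	/*
	 * Only now is a new slow-path request accepted again; while the
	 * flag is set, further updates (including those raised on behalf
	 * of RT/DL tasks) are rejected, so the SCHED_FIFO kthread itself
	 * cannot keep re-queuing work and pinning the frequency at max.
	 */
	sg_policy->work_in_progress = false;
}

The ordering is the point: work_in_progress stays set for the whole duration of the frequency change, which is what the "(mostly) guaranteed" remark in the quoted comment relies on.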