From: "Rafael J. Wysocki"
Date: Wed, 9 May 2018 11:48:06 +0200
Subject: Re: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests
To: Joel Fernandes
Cc: "Rafael J. Wysocki", Juri Lelli, Viresh Kumar, Claudio Scordino,
    Linux Kernel Mailing List, "Rafael J . Wysocki", Peter Zijlstra,
    Ingo Molnar, Patrick Bellasi, Luca Abeni, Joel Fernandes, Linux PM
In-Reply-To: <20180509093932.GE76874@joelaf.mtv.corp.google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, May 9, 2018 at 11:39 AM, Joel Fernandes wrote:
> On Wed, May 09, 2018 at 11:06:24AM +0200, Rafael J. Wysocki wrote:
>> On Wed, May 9, 2018 at 10:51 AM, Joel Fernandes wrote:
>> > On Wed, May 09, 2018 at 10:30:37AM +0200, Rafael J. Wysocki wrote:
>> >> On Wed, May 9, 2018 at 10:06 AM, Joel Fernandes wrote:
>> >> > On Wed, May 09, 2018 at 08:45:30AM +0200, Juri Lelli wrote:
>> >> >> On 08/05/18 21:54, Joel Fernandes wrote:
>> >> >>
>> >> >> [...]
>> >> >>
>> >> >> > Just for discussion sake, is there any need for work_in_progress? If we can
>> >> >> > queue multiple work say kthread_queue_work can handle it, then just queuing
>> >> >> > works whenever they are available should be Ok and the kthread loop can
>> >> >> > handle them. __cpufreq_driver_target is also protected by the work lock if
>> >> >> > there is any concern that can have races... only thing is rate-limiting of
>> >> >> > the requests, but we are doing a rate limiting, just not for the "DL
>> >> >> > increased utilization" type requests (which I don't think we are doing at the
>> >> >> > moment for urgent DL requests anyway).
>> >> >> >
>> >> >> > Following is an untested diff to show the idea. What do you think?
>> >> >> >
>> >> >> > thanks,
>> >> >> >
>> >> >> > - Joel
>> >> >> >
>> >> >> > ----8<---
>> >> >> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
>> >> >> > index d2c6083304b4..862634ff4bf3 100644
>> >> >> > --- a/kernel/sched/cpufreq_schedutil.c
>> >> >> > +++ b/kernel/sched/cpufreq_schedutil.c
>> >> >> > @@ -38,7 +38,6 @@ struct sugov_policy {
>> >> >> >  	struct mutex work_lock;
>> >> >> >  	struct kthread_worker worker;
>> >> >> >  	struct task_struct *thread;
>> >> >> > -	bool work_in_progress;
>> >> >> >
>> >> >> >  	bool need_freq_update;
>> >> >> >  };
>> >> >> > @@ -92,16 +91,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
>> >> >> >  	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
>> >> >> >  		return false;
>> >> >> >
>> >> >> > -	if (sg_policy->work_in_progress)
>> >> >> > -		return false;
>> >> >> > -
>> >> >> >  	if (unlikely(sg_policy->need_freq_update)) {
>> >> >> >  		sg_policy->need_freq_update = false;
>> >> >> > -		/*
>> >> >> > -		 * This happens when limits change, so forget the previous
>> >> >> > -		 * next_freq value and force an update.
>> >> >> > -		 */
>> >> >> > -		sg_policy->next_freq = UINT_MAX;
>> >> >> >  		return true;
>> >> >> >  	}
>> >> >> >
>> >> >> > @@ -129,7 +120,6 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
>> >> >> >  		policy->cur = next_freq;
>> >> >> >  		trace_cpu_frequency(next_freq, smp_processor_id());
>> >> >> >  	} else {
>> >> >> > -		sg_policy->work_in_progress = true;
>> >> >> >  		irq_work_queue(&sg_policy->irq_work);
>> >> >>
>> >> >> Isn't this potentially introducing unneeded irq pressure (and doing the
>> >> >> whole wakeup the kthread thing), while the already active kthread could
>> >> >> simply handle multiple back-to-back requests before going to sleep?
>> >> >
>> >> > How about this? Will use the latest request, and also doesn't do unnecessary
>> >> > irq_work_queue:
>> >> >
>> >> > (untested)
>> >> > -----8<--------
>> >> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
>> >> > index d2c6083304b4..6a3e42b01f52 100644
>> >> > --- a/kernel/sched/cpufreq_schedutil.c
>> >> > +++ b/kernel/sched/cpufreq_schedutil.c
>> >> > @@ -38,7 +38,7 @@ struct sugov_policy {
>> >> >  	struct mutex work_lock;
>> >> >  	struct kthread_worker worker;
>> >> >  	struct task_struct *thread;
>> >> > -	bool work_in_progress;
>> >> > +	bool work_in_progress; /* Has kthread been kicked */
>> >> >
>> >> >  	bool need_freq_update;
>> >> >  };
>> >> > @@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
>> >> >  	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
>> >> >  		return false;
>> >> >
>> >> > -	if (sg_policy->work_in_progress)
>> >> > -		return false;
>> >> > -
>> >>
>> >> Why this change?
>> >>
>> >> Doing the below is rather pointless if work_in_progress is set, isn't it?
>> >
>> > The issue being discussed is that if a work was already in progress, then new
>> > frequency updates will be dropped.
>> > So say even if DL increased in
>> > utilization, nothing will happen because if work_in_progress = true and
>> > need_freq_update = true, we would skip an update. In this diff, I am
>> > allowing the frequency request to be possible while work_in_progress is
>> > true. In the end the latest update will be picked.
>>
>> I'm not sure if taking new requests with the irq_work in flight is a good idea.
>
> That's the point of the original $SUBJECT patch posted by Claudio :) In that
> you can see if urgent_request, then work_in_progress isn't checked.
>
> Also I don't see why we cannot do this with this small tweak as in my diff.
> It solves a real problem seen with frequency updates done with the
> slow-switch as we discussed at OSPM.

OK

> But let me know if I missed your point or something ;)
>
>> >>
>> >> You'll drop the results of it on the floor going forward anyway then AFAICS.
>> >
>> > Why?
>>
>> Because you cannot queue up a new irq_work before the previous one is complete?
>
> We are not doing that. If you see in my diff, I am not queuing an irq_work if
> one was already queued. What we're allowing is an update to next_freq. We
> still use work_in_progress but don't use it to ban all incoming update
> requests as done previously. Instead we use work_in_progress to make sure
> that we don't unnecessarily increase the irq pressure and have excessive
> wakeups (as Juri suggested).
>
> I can clean it up and post it as a patch next week after some testing in case
> that's less confusing.

Yeah, that would help. :-)

> This week I'm actually on vacation and the diff was pure vacation hacking ;-)

No worries.

Thanks,
Rafael