In-Reply-To: <20180509080644.GA76874@joelaf.mtv.corp.google.com>
References: <1525704215-8683-1-git-send-email-claudio@evidence.eu.com>
 <20180508065435.bcht6dyb3rpp6gk5@vireshk-i7>
 <20180509045425.GA158882@joelaf.mtv.corp.google.com>
 <20180509064530.GA1681@localhost.localdomain>
 <20180509080644.GA76874@joelaf.mtv.corp.google.com>
From: "Rafael J. Wysocki"
Date: Wed, 9 May 2018 10:30:37 +0200
Subject: Re: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests
To: Joel Fernandes
Cc: Juri Lelli, Viresh Kumar, Claudio Scordino, Linux Kernel Mailing List,
Wysocki" , Peter Zijlstra , Ingo Molnar , Patrick Bellasi , Luca Abeni , Joel Fernandes , Linux PM Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, May 9, 2018 at 10:06 AM, Joel Fernandes wrote: > On Wed, May 09, 2018 at 08:45:30AM +0200, Juri Lelli wrote: >> On 08/05/18 21:54, Joel Fernandes wrote: >> >> [...] >> >> > Just for discussion sake, is there any need for work_in_progress? If we can >> > queue multiple work say kthread_queue_work can handle it, then just queuing >> > works whenever they are available should be Ok and the kthread loop can >> > handle them. __cpufreq_driver_target is also protected by the work lock if >> > there is any concern that can have races... only thing is rate-limiting of >> > the requests, but we are doing a rate limiting, just not for the "DL >> > increased utilization" type requests (which I don't think we are doing at the >> > moment for urgent DL requests anyway). >> > >> > Following is an untested diff to show the idea. What do you think? >> > >> > thanks, >> > >> > - Joel >> > >> > ----8<--- >> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c >> > index d2c6083304b4..862634ff4bf3 100644 >> > --- a/kernel/sched/cpufreq_schedutil.c >> > +++ b/kernel/sched/cpufreq_schedutil.c >> > @@ -38,7 +38,6 @@ struct sugov_policy { >> > struct mutex work_lock; >> > struct kthread_worker worker; >> > struct task_struct *thread; >> > - bool work_in_progress; >> > >> > bool need_freq_update; >> > }; >> > @@ -92,16 +91,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) >> > !cpufreq_can_do_remote_dvfs(sg_policy->policy)) >> > return false; >> > >> > - if (sg_policy->work_in_progress) >> > - return false; >> > - >> > if (unlikely(sg_policy->need_freq_update)) { >> > sg_policy->need_freq_update = false; >> > - /* >> > - * This happens when limits change, so forget the previous >> > - * next_freq value and force an update. >> > - */ >> > - sg_policy->next_freq = UINT_MAX; >> > return true; >> > } >> > >> > @@ -129,7 +120,6 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time, >> > policy->cur = next_freq; >> > trace_cpu_frequency(next_freq, smp_processor_id()); >> > } else { >> > - sg_policy->work_in_progress = true; >> > irq_work_queue(&sg_policy->irq_work); >> >> Isn't this potentially introducing unneeded irq pressure (and doing the >> whole wakeup the kthread thing), while the already active kthread could >> simply handle multiple back-to-back requests before going to sleep? > > How about this? Will use the latest request, and also doesn't do unnecessary > irq_work_queue: > > (untested) > -----8<-------- > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c > index d2c6083304b4..6a3e42b01f52 100644 > --- a/kernel/sched/cpufreq_schedutil.c > +++ b/kernel/sched/cpufreq_schedutil.c > @@ -38,7 +38,7 @@ struct sugov_policy { > struct mutex work_lock; > struct kthread_worker worker; > struct task_struct *thread; > - bool work_in_progress; > + bool work_in_progress; /* Has kthread been kicked */ > > bool need_freq_update; > }; > @@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) > !cpufreq_can_do_remote_dvfs(sg_policy->policy)) > return false; > > - if (sg_policy->work_in_progress) > - return false; > - Why this change? Doing the below is rather pointless if work_in_progress is set, isn't it? 
You'll drop the results of it on the floor going forward anyway then AFAICS.

> 	if (unlikely(sg_policy->need_freq_update)) {
> 		sg_policy->need_freq_update = false;
> 		/*
> @@ -129,8 +126,11 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
> 		policy->cur = next_freq;
> 		trace_cpu_frequency(next_freq, smp_processor_id());
> 	} else {
> -		sg_policy->work_in_progress = true;
> -		irq_work_queue(&sg_policy->irq_work);
> +		/* work_in_progress helps us not queue unnecessarily */
> +		if (!sg_policy->work_in_progress) {
> +			sg_policy->work_in_progress = true;
> +			irq_work_queue(&sg_policy->irq_work);
> +		}
> 	}
> }
>
> @@ -381,13 +381,23 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> static void sugov_work(struct kthread_work *work)
> {
> 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> +	unsigned int freq;
> +
> +	/*
> +	 * Hold sg_policy->update_lock just enough to handle the case where:
> +	 * if sg_policy->next_freq is updated before work_in_progress is set to
> +	 * false, we may miss queueing the new update request since
> +	 * work_in_progress would appear to be true.
> +	 */
> +	raw_spin_lock(&sg_policy->update_lock);
> +	freq = sg_policy->next_freq;
> +	sg_policy->work_in_progress = false;
> +	raw_spin_unlock(&sg_policy->update_lock);
>
> 	mutex_lock(&sg_policy->work_lock);
> -	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> +	__cpufreq_driver_target(sg_policy->policy, freq,
> 				CPUFREQ_RELATION_L);
> 	mutex_unlock(&sg_policy->work_lock);
> -
> -	sg_policy->work_in_progress = false;
> }
>
> static void sugov_irq_work(struct irq_work *irq_work)
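
[Editor's note: the following is not part of the thread. It is a minimal,
self-contained userspace sketch of the scheme Joel's second diff is aiming
for, added only to make the locking intent easier to follow. The mutex and
condition variable stand in for sg_policy->update_lock and the
irq_work/kthread wakeup; request_freq(), worker_fn() and the printf "apply"
step are made-up stand-ins for sugov_update_commit(), sugov_work() and
__cpufreq_driver_target(); this is not kernel code.]

----8<----
/*
 * Pattern: requesters always latch the latest next_freq and kick the
 * worker only when it has not been kicked yet; the worker snapshots
 * next_freq and clears work_in_progress under the same lock, then
 * applies the frequency outside that lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ sg_policy->update_lock */
static pthread_cond_t kick = PTHREAD_COND_INITIALIZER;          /* ~ irq_work + kthread wakeup */
static unsigned int next_freq;
static bool work_in_progress;   /* "has the worker been kicked" */
static bool stop;

/* ~ sugov_update_commit(): record the latest request, kick at most once */
static void request_freq(unsigned int freq)
{
	pthread_mutex_lock(&update_lock);
	next_freq = freq;                       /* latest request always wins */
	if (!work_in_progress) {
		work_in_progress = true;        /* avoid redundant kicks */
		pthread_cond_signal(&kick);
	}
	pthread_mutex_unlock(&update_lock);
}

/* ~ sugov_work(): snapshot the request and re-open the gate together */
static void *worker_fn(void *arg)
{
	(void)arg;
	for (;;) {
		unsigned int freq;

		pthread_mutex_lock(&update_lock);
		while (!work_in_progress && !stop)
			pthread_cond_wait(&kick, &update_lock);
		if (stop) {
			pthread_mutex_unlock(&update_lock);
			break;
		}
		freq = next_freq;               /* snapshot ...               */
		work_in_progress = false;       /* ... and clear the flag     */
		pthread_mutex_unlock(&update_lock);

		/* ~ __cpufreq_driver_target(), done outside update_lock */
		printf("apply freq %u\n", freq);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker_fn, NULL);
	request_freq(1200000);
	request_freq(1800000);  /* may coalesce with the previous request */
	usleep(10000);

	pthread_mutex_lock(&update_lock);
	stop = true;
	pthread_cond_signal(&kick);
	pthread_mutex_unlock(&update_lock);
	pthread_join(tid, NULL);
	return 0;
}
----8<----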