Received: by 2002:ac0:a594:0:0:0:0:0 with SMTP id m20-v6csp149436imm; Mon, 21 May 2018 03:52:45 -0700 (PDT) X-Google-Smtp-Source: AB8JxZqNumwOKL4Dyuyj75A9bM9bMG28UZq2SuJ3fUbxxs1NdSDgoFyQYrgiK9jjbeb37H7aDg5b X-Received: by 2002:a17:902:8a8c:: with SMTP id p12-v6mr19475942plo.94.1526899965594; Mon, 21 May 2018 03:52:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1526899965; cv=none; d=google.com; s=arc-20160816; b=mc5E4Ov/ILIMVmhhLgQpFhUKK9IPwE4KqExlmwftrlhgUXcY9gXNjh7BraavAkWKJ7 1X+yH5rjk57qUcXwpiLWg+vz/JkmTl5+eN28RpAEj1brSt8wWDajsfMjFkRZSzo/biBy uktfFsZyxaM5QL/w46szGfEWoOW8h2XfoQqngfXZTBecJ8vcKo1fZzXr/1eSfn6D5uh6 QO7dEftBn0xlckSh8xSNGrFiAq+uOdpBnI0oposRQ273ZaqOy5UfVsMYkoOWu3RdAjxv T5IOEenLFkCTrPL2J3tQCxD+VS9JbaM2l2P63mi4Cczo+bEt1zS1nCbw8Sq/kzC4Jd8s BlKg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:user-agent:in-reply-to :content-disposition:mime-version:references:message-id:subject:cc :to:from:date:arc-authentication-results; bh=BM3yFr7HCcqo7KDAJXMvsSM8C7r17urmIuYzrfyCEyE=; b=R/UxGf2SWR5RI/uLM1mDBKbF9kn/hFYjQdCr+D470sO15tN6TfpWxXrUhNm2HZx9+8 O3gId+2LKe1rZkL0EYPrWqONyaBUx82X71T/rp8uaaIWmd8fpwvxswM82MDTQFx/GrEW G+GzfTxcu8Gh8oRkgDuDpZ7h0utjy+worJRrMHEQCmotvlxn/2f+KR2RbBdpzys8hc+9 W1rQVDJNO7MeijnDiT0qjD5JIT2/SdtnNPyXmPtvhqLgM17qBC9d+S6OSh3005pAvkWC hmtdtVV4pQ/OoHbwZC2EvcQG4r0wHldalG5Y/q9m6TQN3qR/aTyIpMasYrKFrvumvxJk hr4g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id q1-v6si13889161plb.549.2018.05.21.03.52.31; Mon, 21 May 2018 03:52:45 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751905AbeEUKvE (ORCPT + 99 others); Mon, 21 May 2018 06:51:04 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46914 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750947AbeEUKvD (ORCPT ); Mon, 21 May 2018 06:51:03 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8C04D1435; Mon, 21 May 2018 03:51:00 -0700 (PDT) Received: from e110439-lin (e110439-lin.cambridge.arm.com [10.1.210.68]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 57F663F25D; Mon, 21 May 2018 03:50:58 -0700 (PDT) Date: Mon, 21 May 2018 11:50:55 +0100 From: Patrick Bellasi To: "Joel Fernandes (Google.)" Cc: linux-kernel@vger.kernel.org, "Joel Fernandes (Google)" , Viresh Kumar , "Rafael J . Wysocki" , Peter Zijlstra , Ingo Molnar , Juri Lelli , Luca Abeni , Todd Kjos , claudio@evidence.eu.com, kernel-team@android.com, linux-pm@vger.kernel.org Subject: Re: [PATCH v2] schedutil: Allow cpufreq requests to be made even when kthread kicked Message-ID: <20180521105055.GQ30654@e110439-lin> References: <20180518185501.173552-1-joel@joelfernandes.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180518185501.173552-1-joel@joelfernandes.org> User-Agent: Mutt/1.5.24 (2015-08-30) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 18-May 11:55, Joel Fernandes (Google.) wrote: > From: "Joel Fernandes (Google)" > > Currently there is a chance of a schedutil cpufreq update request to be > dropped if there is a pending update request. This pending request can > be delayed if there is a scheduling delay of the irq_work and the wake > up of the schedutil governor kthread. > > A very bad scenario is when a schedutil request was already just made, > such as to reduce the CPU frequency, then a newer request to increase > CPU frequency (even sched deadline urgent frequency increase requests) > can be dropped, even though the rate limits suggest that its Ok to > process a request. This is because of the way the work_in_progress flag > is used. > > This patch improves the situation by allowing new requests to happen > even though the old one is still being processed. Note that in this > approach, if an irq_work was already issued, we just update next_freq > and don't bother to queue another request so there's no extra work being > done to make this happen. Maybe I'm missing something but... is not this patch just a partial mitigation of the issue you descrive above? If a DL freq increase is queued, with this patch we store the request but we don't actually increase the frequency until the next schedutil update, which can be one tick away... isn't it? If that's the case, maybe something like the following can complete the cure? ---8<--- #define SUGOV_FREQ_NONE 0 static unsigned int sugov_work_update(struct sugov_policy *sg_policy, unsigned int prev_freq) { unsigned long irq_flags; bool update_freq = true; unsigned int next_freq; /* * Hold sg_policy->update_lock shortly to handle the case where: * incase sg_policy->next_freq is read here, and then updated by * sugov_update_shared just before work_in_progress is set to false * here, we may miss queueing the new update. * * Note: If a work was queued after the update_lock is released, * sugov_work will just be called again by kthread_work code; and the * request will be proceed before the sugov thread sleeps. */ raw_spin_lock_irqsave(&sg_policy->update_lock, irq_flags); next_freq = sg_policy->next_freq; sg_policy->work_in_progress = false; if (prev_freq == next_freq) update_freq = false; raw_spin_unlock_irqrestore(&sg_policy->update_lock, irq_flags); /* * Update the frequency only if it has changed since the last call. */ if (update_freq) { mutex_lock(&sg_policy->work_lock); __cpufreq_driver_target(sg_policy->policy, next_freq, CPUFREQ_RELATION_L); mutex_unlock(&sg_policy->work_lock); return next_freq; } return SUGOV_FREQ_NONE; } static void sugov_work(struct kthread_work *work) { struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work); unsigned int prev_freq = 0; /* * Keep updating the frequency until we end up with a frequency which * satisfies the most recent request we got meanwhile. */ do { prev_freq = sugov_work_update(sg_policy, prev_freq); } while (prev_freq != SUGOV_FREQ_NONE); } ---8<--- -- #include Patrick Bellasi