Subject: Re: [PATCH] cpufreq: schedutil: rate limits for SCHED_DEADLINE
To: "Rafael J. Wysocki", Juri Lelli
Cc: "Rafael J. Wysocki", Viresh Kumar, Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki", Patrick Bellasi, Dietmar Eggemann, Morten Rasmussen, Vincent Guittot, Todd Kjos, Joel Fernandes, Linux PM, Linux Kernel Mailing List
References: <1518109302-8239-1-git-send-email-claudio@evidence.eu.com> <20180209035143.GX28462@vireshk-i7> <197c26ba-c2a6-2de7-dffa-5b884079f746@evidence.eu.com> <11598161.veS9VGWB8G@aspire.rjw.lan> <20180209105305.GD12979@localhost.localdomain> <20180209112618.GE12979@localhost.localdomain> <20180209115155.GG12979@localhost.localdomain> <20180209125245.GH12979@localhost.localdomain>
From: Claudio Scordino
Message-ID: <45eb0110-f06e-c9c8-ad0b-16349976ffa3@evidence.eu.com>
Date: Fri, 9 Feb 2018 14:20:32 +0100

On 09/02/2018 13:56, Rafael J. Wysocki wrote:
> On Fri, Feb 9, 2018 at 1:52 PM, Juri Lelli wrote:
>> On 09/02/18 13:08, Rafael J. Wysocki wrote:
>>> On Fri, Feb 9, 2018 at 12:51 PM, Juri Lelli wrote:
>>>> On 09/02/18 12:37, Rafael J. Wysocki wrote:
>>>>> On Fri, Feb 9, 2018 at 12:26 PM, Juri Lelli wrote:
>>>>>> On 09/02/18 12:04, Rafael J. Wysocki wrote:
>>>>>>> On Fri, Feb 9, 2018 at 11:53 AM, Juri Lelli wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On 09/02/18 11:36, Rafael J. Wysocki wrote:
>>>>>>>>> On Friday, February 9, 2018 9:02:34 AM CET Claudio Scordino wrote:
>>>>>>>>>> Hi Viresh,
>>>>>>>>>>
>>>>>>>>>> On 09/02/2018 04:51, Viresh Kumar wrote:
>>>>>>>>>>> On 08-02-18, 18:01, Claudio Scordino wrote:
>>>>>>>>>>>> When the SCHED_DEADLINE scheduling class increases the CPU utilization,
>>>>>>>>>>>> we should not wait for the rate limit, otherwise we may miss some deadlines.
>>>>>>>>>>>>
>>>>>>>>>>>> Tests using rt-app on Exynos5422 have shown reductions of about 10% of deadline
>>>>>>>>>>>> misses for tasks with low RT periods.
>>>>>>>>>>>>
>>>>>>>>>>>> The patch applies on top of the one recently proposed by Peter to drop the
>>>>>>>>>>>> SCHED_CPUFREQ_* flags.
>>>>>>>>>
>>>>>>>>> [cut]
>>>>>>>>>
>>>>>>>>>>> Is it possible to (somehow) check here if the DL tasks will miss
>>>>>>>>>>> their deadlines if we continue to run at the current frequency? And only
>>>>>>>>>>> ignore the rate limit if that is the case?
>>>>>>>>
>>>>>>>> Isn't it always the case? The utilization associated with DL tasks is given
>>>>>>>> by what the user said is needed to meet the task's deadlines (admission
>>>>>>>> control). If that task wakes up and we realize that adding its
>>>>>>>> utilization contribution is going to require a frequency change, we
>>>>>>>> should _theoretically_ always do it, or it will be too late. Now, the user
>>>>>>>> might have asked for a bit more than strictly required (this is
>>>>>>>> usually the case, to compensate for discrepancies between theory and the
>>>>>>>> real world, e.g. hw transition limits), but I don't think there is a way
>>>>>>>> to know "how much". :/
>>>>>>>
>>>>>>> You are right.
>>>>>>>
>>>>>>> I'm somewhat concerned about "fast switch" cases where the rate limit
>>>>>>> is used to reduce overhead.
>>>>>>
>>>>>> Mmm, right. I'm thinking that in those cases we could leave the rate
>>>>>> limit as is. The user should then be aware of it and consider it as
>>>>>> proper overhead when designing her/his system.
>>>>>>
>>>>>> But then, isn't it the same for "non fast switch" platforms? I mean,
>>>>>> even in the latter case we can't go faster than hw limits... mmm, maybe
>>>>>> the difference is that in the former case we could go as fast as theory
>>>>>> would expect... but we shouldn't. :)
>>>>>
>>>>> Well, in practical terms that means "no difference" IMO. :-)
>>>>>
>>>>> I can imagine that in some cases this approach may lead to better
>>>>> results than reducing the rate limit overall, but I'm not sure about
>>>>> the general case.
>>>>>
>>>>> I mean, if overriding the rate limit doesn't take place very often,
>>>>> then it really should make no difference overhead-wise. Now, of
>>>>> course, how to define "not very often" is a good question, as that
>>>>> leads to rate-limiting the overriding of the original rate limit, and
>>>>> that scheme may continue indefinitely...
>>>>
>>>> :)
>>>>
>>>> My impression is that the rate limit helps a lot for CFS, where the
>>>> "true" utilization is not known in advance, and being too responsive
>>>> might actually be counterproductive.
>>>>
>>>> For DEADLINE (and RT, with differences) we should always respond as
>>>> quickly as we can (and probably remember that a frequency transition was
>>>> requested if the hw was already performing one, but that's another patch)
>>>> because, if we don't, a task belonging to a lower-priority class might
>>>> induce deadline misses in higher-priority activities. E.g., a CFS task
>>>> that happens to trigger a freq switch right before a DEADLINE task wakes
>>>> up and needs a higher frequency to meet its deadline: if we wait for
>>>> the rate limit of the CFS-originated transition... deadline miss!
>>>
>>> Fair enough, but if there's too much overhead as a result of this, you
>>> can't guarantee the deadlines to be met anyway.
>>
>> Indeed. I guess this only works if corner cases like the one above don't
>> happen too often.
>
> Well, that's the point.
>
> So there is a tradeoff: do we want to allow deadlines to be missed
> because of excessive overhead, or do we want to allow deadlines to be
> missed because of the rate limit?

For a very small number of tasks, the tests have indeed shown that the approach pays off: we get a significant reduction of misses with a negligible increase of energy consumption.
I still need to check what happens with a large number of tasks, trying to reproduce the "ramp up" pattern (in which DL keeps increasing the utilization, ignoring the rate limit and adding overhead).

Thanks,

Claudio