Date: Thu, 19 Sep 2019 15:23:16 +0100
From: Qais Yousef
To: Vincent Guittot
Cc: Jing-Ting Wu, Valentin Schneider, Peter Zijlstra, Matthias Brugger, wsd_upstream@mediatek.com, linux-kernel, LAK, linux-mediatek@lists.infradead.org
Subject: Re: [PATCH 1/1] sched/rt: avoid contend with CFS task
Message-ID: <20190919142315.vmrrpvljpspqpurp@e107158-lin.cambridge.arm.com>
References: <1567048502-6064-1-git-send-email-jing-ting.wu@mediatek.com> <20190830145501.zadfv2ffuu7j46ft@e107158-lin.cambridge.arm.com> <1567689999.2389.5.camel@mtkswgap22> <1568892135.4892.10.camel@mtkswgap22>
On 09/19/19 14:27, Vincent Guittot wrote:
> > > > But for performance, I think it is better to differentiate between an idle CPU and a CPU that has a CFS task.
> > > >
> > > > For example, we use rt-app to evaluate runnable time in a non-patched environment.
> > > > There are (NR_CPUS-1) heavy CFS tasks and 1 RT task. When a CFS task is running, the RT task wakes up and chooses the same CPU.
> > > > The CFS task will be preempted and stay runnable until it is migrated to another CPU by load balance.
> > > > But load balance is not triggered immediately; it is only triggered when a timer tick hits with some condition satisfied (e.g. rq->next_balance).
> > >
> > > Yes, you will have to wait for the next tick that will trigger an idle
> > > load balance, because you have an idle CPU and 2 runnable tasks (1 RT +
> > > 1 CFS) on the same CPU. But you should not wait for more than 1 tick.
> > >
> > > The current load_balance doesn't correctly handle the situation of 1
> > > CFS and 1 RT task on the same CPU while 1 CPU is idle. There is a rework
> > > of the load_balance that is under review on the mailing list which
> > > fixes this problem, and your CFS task should migrate to the idle CPU
> > > faster than it does now.
> >
> > Periodic load balance should be triggered when the current jiffies is behind
> > rq->next_balance, but rq->next_balance is often not exactly the same
> > as the next tick.
> > If cpu_busy, interval = sd->balance_interval * sd->busy_factor, and
>
> But if there is an idle CPU on the system, the next idle load balance
> should apply shortly, because the busy_factor is not used for this CPU,
> which is not busy.
> In this case, the next_balance interval is sd_weight, which is probably
> 4ms at cluster level and 8ms at system level in your case. This means
> between 1 and 2 ticks.

But if the CFS task we're preempting was latency sensitive, this 1 or 2
tick delay is too late a recovery.
So while it's good that we recover, a preventative approach would be useful too. Just saying :-)

I'm still not sure this is the best longer-term approach.

--
Qais Yousef