Date: Wed, 7 Oct 2020 11:36:53 +0100
From: Qais Yousef
To: Rob Clark
Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter, Rob Clark, open list, Steven Rostedt, "Peter Zijlstra (Intel)"
Subject: Re: [PATCH v2 0/3] drm: commit_work scheduling
Message-ID: <20201007103653.qjohhta7douhlb22@e107158-lin.cambridge.arm.com>
References: <20200930211723.3028059-1-robdclark@gmail.com> <20201002110105.e56qrvzoqfioi4hs@e107158-lin.cambridge.arm.com> <20201005150024.mchfdtd62rlkuh4s@e107158-lin.cambridge.arm.com> <20201006105918.v3xspb6xasjyy5ky@e107158-lin.cambridge.arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 10/06/20 13:04, Rob Clark wrote:
> On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef wrote:
> >
> > On 10/05/20 16:24, Rob Clark wrote:
> >
> > [...]
> >
> > > > RT planning and partitioning is not an easy task for sure. You might
> > > > want to consider using affinities too to get stronger guarantees for
> > > > some tasks and prevent cross-talk.
> > >
> > > There is some cgroup stuff that is pinning SF and some other stuff to
> > > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > > be doing anything heavy enough to need the big cores.
> >
> > Ah, so you're on a big.LITTLE type of system. I have done some work which
> > enables biasing RT tasks towards big cores and controlling the default
> > boost value if you have util clamp and schedutil enabled. You can use
> > util clamp in general to help with DVFS related response time delays.
> >
> > I haven't done any work to try to pick a small core first but fall back
> > to a big one if there's no other alternative.
> >
> > It'd be interesting to know how often you end up on a big core if you
> > remove the affinity. The RT scheduler picks the first cpu in the lowest
> > priority mask. So it should have this bias towards picking smaller cores
> > first if they're in the lower priority mask (ie: not running higher
> > priority RT tasks).
>
> fwiw, the issue I'm looking at is actually at the opposite end of the
> spectrum, less demanding apps that let cpus throttle down to low
> OPPs.. which stretches out the time taken at each step in the path
> towards the screen (which seems to improve the odds that we hit priority
> inversion scenarios with SCHED_FIFO things stomping on important CFS
> things)

So you do have the problem of an RT task preempting an important CFS task.

> There is a *big* difference in # of cpu cycles per frame between
> highest and lowest OPP..

To combat DVFS related delays, you can use util clamp.
Hopefully this article helps explain it if you haven't come across it before:

	https://lwn.net/Articles/762043/

You can use sched_setattr() to set SCHED_FLAG_UTIL_CLAMP_MIN for a task. This
will guarantee that every time this task is running it'll appear to have at
least this utilization value, so the schedutil governor (which must be in use
for this to work) will pick the right performance point (OPP). The scheduler
will also try its best to place the task on a core that meets the minimum
requested performance point (hinted by setting uclamp_min).

Thanks

--
Qais Yousef