Date: Tue, 6 Oct 2020 11:59:21 +0100
From: Qais Yousef
To: Rob Clark
Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
    Rob Clark, open list, Steven Rostedt, "Peter Zijlstra (Intel)"
Subject: Re: [PATCH v2 0/3] drm: commit_work scheduling
Message-ID: <20201006105918.v3xspb6xasjyy5ky@e107158-lin.cambridge.arm.com>
References: <20200930211723.3028059-1-robdclark@gmail.com>
 <20201002110105.e56qrvzoqfioi4hs@e107158-lin.cambridge.arm.com>
 <20201005150024.mchfdtd62rlkuh4s@e107158-lin.cambridge.arm.com>

On 10/05/20 16:24, Rob Clark wrote:

[...]

> > RT planning and partitioning is not an easy task for sure. You might want to
> > consider using affinities too to get stronger guarantees for some tasks and
> > prevent cross-talking.
>
> There is some cgroup stuff that is pinning SF and some other stuff to
> the small cores, fwiw.. I think the reasoning is that they shouldn't
> be doing anything heavy enough to need the big cores.

Ah, so you're on a big.LITTLE type of system. I have done some work that
enables biasing RT tasks towards big cores and controlling the default boost
value, if you have util_clamp and schedutil enabled. You can use util_clamp in
general to help with DVFS-related response-time delays.

I haven't done any work on trying to pick a small core first and only falling
back to a big one if there's no other alternative. It'd be interesting to know
how often you end up on a big core if you remove the affinity.

The RT scheduler picks the first CPU in the lowest-priority mask, so it should
already be biased towards picking smaller cores first if they're in the lower
priority mask (i.e. not running higher-priority RT tasks).

So unless you absolutely don't want any RT tasks on the big cores, it'd be
worth removing this affinity and checking what percentage of time you spend on
the little cores. It should help with your worst-case scenario, since it makes
more CPUs available.

> > > run ASAP once fences are signalled, and vblank_work to run at a
> > > slightly higher priority still.  But the correct choice for priorities
> > > here depends on what userspace is using, it all needs to fit together
> > > properly.
> >
> > By userspace here I think you mean non-display-pipeline-related RT tasks
> > that you need to coexist with and that could still disrupt your pipeline?
>
> I mean, commit_work should be higher priority than the other (display
> related) RT tasks.  But the kernel doesn't know what those priorities
> are.

So if you set commit_work up with sched_set_fifo(), it'd be at a reasonably
high priority (50) by default, which means you'd just need to manage your SF
priorities without having to change the commit_work priority itself?

> > Using RT on a General Purpose System is hard for sure. One of the major
> > challenges is that there's no admin that has a full view of the system to
> > do proper RT planning.
> >
> > We need a proper RT balancer daemon that helps partitioning the system for
> > multiple RT apps on these systems..
> >
> > >
> > > >
> > > > I do appreciate that maybe some of these tasks have varying requirements
> > > > during their life time. e.g. they have RT properties during specific
> > > > critical sections but otherwise are CFS tasks. I think the UI thread in
> > > > Android behaves like that.
> > > >
> > > > It's worth IMO trying that approach I pointed out earlier to see if
> > > > making RT try to pick an idle CPU rather than preempt CFS helps. Not
> > > > sure if it'd be accepted but IMHO it's a better direction to consider
> > > > and discuss.
> > >
> > > The problem I was seeing was actually the opposite.. commit_work
> > > becomes runnable (fences signalled) but doesn't get a chance to run
> > > because a SCHED_FIFO SF thread is running. (Maybe I misunderstood and
> > > your approach would help this case too?)
> >
> > Ah okay. Sorry I got it the wrong way around for some reason. I thought
> > this task was preempting other CFS-based pipelined tasks.
> >
> > So your system seems to be overcommitted. Is SF short for SurfaceFlinger?
> > Under what scenarios do you have many SurfaceFlinger tasks? On Android I
> > remember seeing they have a priority of 1 or 2.
>
> yeah, SF==SurfaceFlinger, and yeah, 1 and 2..
>
> > sched_set_fifo() will use priority 50. If you set all your pipeline tasks
> > to this priority, what happens?
>
> I think this would work.. drm/msm doesn't use vblank work, so I
> wouldn't really have problems with commit_work preempting vblank_work.
> But I think the best option (and to handle the case if android changes
> the RT priorities around in the future) is to let userspace set the
> priorities.

I don't really mind. But if we know that two kernel threads need specific
priorities relative to each other, it seems better to me to handle that
properly in the kernel. Userspace would then only need to worry about managing
its *own* priorities relative to that.

I've just seen Peter suggest in another email using SCHED_DEADLINE for vblank
work, which I think achieves the above if commit_work uses sched_set_fifo().
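
Roughly along the lines of the sketch below, purely for illustration (not the
actual patches; the function name, worker names and the runtime/deadline
numbers are made up):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/time64.h>
#include <uapi/linux/sched/types.h>

/* Illustrative sketch only: names and numbers below are made up. */
static int example_setup_display_workers(void)
{
	/* Deadline parameters are invented for the example (~one 60Hz frame). */
	struct sched_attr attr = {
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  =  1 * NSEC_PER_MSEC,
		.sched_deadline = 16 * NSEC_PER_MSEC,
		.sched_period   = 16 * NSEC_PER_MSEC,
	};
	struct kthread_worker *commit_worker, *vblank_worker;

	/* commit worker: SCHED_FIFO at the in-kernel default priority (50) */
	commit_worker = kthread_create_worker(0, "drm-commit");
	if (IS_ERR(commit_worker))
		return PTR_ERR(commit_worker);
	sched_set_fifo(commit_worker->task);

	/* vblank worker: SCHED_DEADLINE, per Peter's suggestion */
	vblank_worker = kthread_create_worker(0, "drm-vblank");
	if (IS_ERR(vblank_worker))
		return PTR_ERR(vblank_worker);
	return sched_setattr_nocheck(vblank_worker->task, &attr);
}

The idea being that commit_work competes at FIFO-50 against SF, while the
vblank worker's runtime is guaranteed by the deadline scheduler rather than by
hand-picking yet another FIFO priority.
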
> > > > Or maybe you can wrap the userspace pipeline critical section lock such
> > > > that any task holding it will automatically be promoted to SCHED_FIFO
> > > > and then demoted to CFS once it releases it.
> > >
> > > The SCHED_DEADLINE + token passing approach that the lwn article
> > > mentioned sounds interesting, if that eventually becomes possible.
> > > But doesn't really help today..
> >
> > We were present in the room with Alessio when he gave that talk :-)
> >
> > You might have seen Valentin's talk at LPC where he's trying to get
> > proxy execution into shape, which is a prerequisite for using
> > SCHED_DEADLINE in these scenarios. IIRC it should allow all dependent
> > tasks to run from the context of the deadline task during the display
> > pipeline critical section.
> >
> > By the way, do you have issues with softirqs delaying your RT tasks'
> > execution time?
>
> I don't *think* so, but I'm not 100% sure if they are showing up in

If you ever get a chance to run a high-network-throughput test, it might help
to see whether softirqs are affecting you. I know Android has issues with this
under some circumstances.

> traces.  So far it seems like SF stomping on commit_work.  (There is
> the added complication that there are some chrome gpu-process tasks in
> between SF and the display, including CrGpuMain (which really doesn't
> want to be SCHED_FIFO when executing gl commands on behalf of
> something unrelated to the compositor.. the deadline approach, IIUC,
> might be the better option eventually for this?)

If you meant the sched_deadline + token approach, then yeah, I think it'd be
better. But as you said, we can't do this yet :/

But as Peter pointed out, this doesn't mean you can't use SCHED_DEADLINE at
all if it does make sense.

Thanks

--
Qais Yousef