Subject: Re: [PATCH drm-misc-next 1/3] drm/sched: implement dynamic job flow control
Date: Wed, 27 Sep 2023 16:12:14 +0200
From: Danilo Krummrich
Organization: RedHat
To: Christian König, Boris Brezillon
Cc: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, faith.ekstrand@collabora.com, luben.tuikov@amd.com, dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org, Donald Robson, Frank Binns, Sarah Walker
References: <20230924224555.15595-1-dakr@redhat.com> <20230925145513.49abcc52@collabora.com> <20230926091129.2d7d7472@collabora.com> <390db8af-1510-580b-133c-dacf5adc56d1@amd.com> <5c6e1348-ec38-62b1-a056-1b7a724d99eb@redhat.com> <1f113c7b-975e-c72f-e6f0-ded55d10d64f@amd.com>
In-Reply-To: <1f113c7b-975e-c72f-e6f0-ded55d10d64f@amd.com>

On 9/27/23 14:15, Christian König wrote:
> On 27.09.23 at 14:11, Danilo Krummrich wrote:
>> On 9/27/23 13:54, Christian König wrote:
>>> On 26.09.23 at 09:11, Boris Brezillon wrote:
>>>> On Mon, 25 Sep 2023 19:55:21 +0200 Christian König wrote:
>>>>
>>>>> On 25.09.23 at 14:55, Boris Brezillon wrote:
>>>>>> +The Imagination team, who are probably interested too.
>>>>>>
>>>>>> On Mon, 25 Sep 2023 00:43:06 +0200 Danilo Krummrich wrote:
>>>>>>> Currently, job flow control is implemented simply by limiting the
>>>>>>> number of jobs in flight. Therefore, a scheduler is initialized with a
>>>>>>> submission limit that corresponds to a certain number of jobs.
>>>>>>>
>>>>>>> This implies that for each job drivers need to account for the maximum
>>>>>>> job size possible in order to not overflow the ring buffer.
>>>>>>>
>>>>>>> However, there are drivers, such as Nouveau, where the job size has a
>>>>>>> rather large range. For such drivers it can easily happen that job
>>>>>>> submissions not even filling the ring by 1% can block subsequent
>>>>>>> submissions, which, in the worst case, can lead to the ring running dry.
>>>>>>>
>>>>>>> In order to overcome this issue, allow for tracking the actual job size
>>>>>>> instead of the number of jobs. Therefore, add a field to track a job's
>>>>>>> submission units, which represents the number of units a job contributes
>>>>>>> to the scheduler's submission limit.
>>>>>>
>>>>>> As mentioned earlier, this might allow some simplifications in the
>>>>>> PowerVR driver, where we do flow control using a dma_fence returned
>>>>>> through ->prepare_job(). The only thing that'd be missing is a way to
>>>>>> dynamically query the size of a job (a new hook?), instead of having the
>>>>>> size fixed at creation time, because PVR jobs embed native fence waits,
>>>>>> and the number of native fences will decrease if some of these fences
>>>>>> are signalled before ->run_job() is called, thus reducing the job size.
>>>>>
>>>>> Exactly that is a little bit questionable, since it allows the device
>>>>> to postpone jobs indefinitely.
>>>>>
>>>>> It would be good if the scheduler were able to validate whether it's
>>>>> ever able to run the job when it is pushed into the entity.
>>>>
>>>> Yes, we do that already. We check that the immutable part of the job
>>>> (everything that's not a native fence wait) fits in the ringbuf.
>>>
>>> Yeah, but thinking more about it, there might be really bad side effects.
>>> We shouldn't use a callback nor job credits, because it might badly
>>> influence fairness between entities.
>>>
>>> In other words, when one entity always submits large jobs and another one
>>> always small ones, the scheduler would prefer the one which submits the
>>> smaller ones, because they are easier to fit into the ring buffer.
>>
>> That's true. Admittedly, I mostly had DRM_SCHED_POLICY_SINGLE_ENTITY in
>> mind, where obviously we just have a single entity.
>
> I also only stumbled over it after thinking Boris' suggestions through.
> That problem really wasn't obvious.
>
>>> What we can do is the following:
>>> 1. The scheduler has some initial credits it can use to push jobs.
>>> 2. Each scheduler fence (and *not* the job) has a credits field of how
>>>    much it will use.
>>> 3. After letting a job run, the credits of its fence are subtracted from
>>>    the available credits of the scheduler.
>>> 4. The scheduler can keep running jobs as long as it has a positive
>>>    credit count.
>>> 5. When the credit count becomes negative, it goes to sleep until a
>>>    scheduler fence signals and the count becomes positive again.
>>
>> Wouldn't it be possible that we overflow the ring with that, or at least
>> end up in a busy wait in run_job()? What if the remaining credit count is
>> greater than 0 but smaller than the number of credits the next picked job
>> requires?
>
> The initial credits the scheduler gets should be only half the ring size.

Ok, with that premise it works. I'd be fine with that, although this means
that as soon as we hit RING_SIZE / 2 + 1 credits we don't push more stuff to
the ring even if it would actually fit.
> So as long as that is positive, you should have enough space even for the
> biggest jobs.
>
> We should still have a warning if userspace tries to push something bigger
> into an entity.

Well, if the driver thinks it's fine and it won't exceed the capacity once it
hits run_job(), it's probably fine, thinking of PowerVR. However, we can have
a warning when run_job() returns with more credits than we can handle.

> Regards,
> Christian.
>
>>> This way jobs are handled equally, you can still push jobs up to at least
>>> half your ring buffer size, and you should be able to handle your PowerVR
>>> case by calculating the credits you actually used in your run_job()
>>> callback.
>>>
>>> As far as I can see, that approach should work, shouldn't it?
>>>
>>> Regards,
>>> Christian.