Date: Mon, 14 Aug 2023 12:21:33 +0200
Subject: Re: [PATCH v5] Documentation/gpu: Add a VM_BIND async draft document
From: Thomas Hellström
To: Danilo Krummrich
Cc: Matthew Brost, Francois Dugast, linux-kernel@vger.kernel.org, Oak Zeng,
 dri-devel@lists.freedesktop.org, Nirmoy Das, intel-xe@lists.freedesktop.org
References: <20230715154543.13183-1-thomas.hellstrom@linux.intel.com>
 <955bc56a-6cfa-447a-31a9-2b35d8b23149@redhat.com>
In-Reply-To: <955bc56a-6cfa-447a-31a9-2b35d8b23149@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 7/19/23 16:24, Danilo Krummrich wrote:
> On 7/15/23 17:45, Thomas Hellström wrote:
>> Add a motivation for and description of asynchronous VM_BIND operation
>>
>> v2:
>> - Fix typos (Nirmoy Das)
>> - Improve the description of a memory fence (Oak Zeng)
>> - Add a reference to the document in the Xe RFC.
>> - Add pointers to sample uAPI suggestions
>> v3:
>> - Address review comments (Danilo Krummrich)
>> - Formatting fixes
>> v4:
>> - Address typos (Francois Dugast)
>> - Explain why in-fences are not allowed for VM_BIND operations for long-
>>   running workloads (Matthew Brost)
>> v5:
>> - More typo- and style fixing
>> - Further clarify the implications of disallowing in-fences for VM_BIND
>>   operations for long-running workloads (Matthew Brost)
>>
>> Signed-off-by: Thomas Hellström
>> Acked-by: Nirmoy Das
>> ---
>>  Documentation/gpu/drm-vm-bind-async.rst | 171 ++++++++++++++++++++++++
>>  Documentation/gpu/rfc/xe.rst            |   4 +-
>>  2 files changed, 173 insertions(+), 2 deletions(-)
>>  create mode 100644 Documentation/gpu/drm-vm-bind-async.rst
>>
>> diff --git a/Documentation/gpu/drm-vm-bind-async.rst
>> b/Documentation/gpu/drm-vm-bind-async.rst
>> new file mode 100644
>> index 000000000000..d2b02a38198a
>> --- /dev/null
>> +++ b/Documentation/gpu/drm-vm-bind-async.rst
>> @@ -0,0 +1,171 @@
>> +====================
>> +Asynchronous VM_BIND
>> +====================
>> +
>> +Nomenclature:
>> +=============
>> +
>> +* ``VRAM``: On-device memory. Sometimes referred to as device local
>> +  memory.
>> +
>> +* ``gpu_vm``: A GPU address space. Typically per process, but can be
>> +  shared by multiple processes.
>
> Again, pretty obvious, but I suggest to be explicit "GPU virtual
> address space".
>
> Also, you might want to remove "draft" from the patch subject.
>
> Otherwise: Reviewed-by: Danilo Krummrich

Sure. Thanks for reviewing, Danilo.

>
>> +
>> +* ``VM_BIND``: An operation or a list of operations to modify a gpu_vm
>> +  using an IOCTL. The operations include mapping and unmapping system-
>> +  or VRAM memory.
>> +
>> +* ``syncobj``: A container that abstracts synchronization objects. The
>> +  synchronization objects can be either generic, like dma-fences, or
>> +  driver specific. A syncobj typically indicates the type of the
>> +  underlying synchronization object.
>> +
>> +* ``in-syncobj``: Argument to a VM_BIND IOCTL, the VM_BIND operation
>> +  waits for these before starting.
>> +
>> +* ``out-syncobj``: Argument to a VM_BIND IOCTL, the VM_BIND operation
>> +  signals these when the bind operation is complete.
>> +
>> +* ``memory fence``: A synchronization object, different from a
>> +  dma-fence. A memory fence uses the value of a specified memory
>> +  location to determine signaled status. A memory fence can be awaited
>> +  and signaled by both the GPU and CPU. Memory fences are sometimes
>> +  referred to as user-fences, userspace-fences or gpu futexes and do
>> +  not necessarily obey the dma-fence rule of signaling within a
>> +  "reasonable amount of time". The kernel should thus avoid waiting
>> +  for memory fences with locks held.
>> +
>> +* ``long-running workload``: A workload that may take more than the
>> +  current stipulated dma-fence maximum signal delay to complete and
>> +  which therefore needs to set the gpu_vm or the GPU execution context
>> +  in a certain mode that disallows completion dma-fences.
>> +
>> +* ``exec function``: An exec function is a function that revalidates
>> +  all affected gpu_vmas, submits a GPU command batch and registers the
>> +  dma_fence representing the GPU command's activity with all affected
>> +  dma_resvs. For completeness, although not covered by this document,
>> +  it's worth mentioning that an exec function may also be the
>> +  revalidation worker that is used by some drivers in compute /
>> +  long-running mode.
>> +
>> +* ``bind context``: A context identifier used for the VM_BIND
>> +  operation. VM_BIND operations that use the same bind context can be
>> +  assumed, where it matters, to complete in order of submission. No
>> +  such assumptions can be made for VM_BIND operations using separate
>> +  bind contexts.
>> +
>> +* ``UMD``: User-mode driver.
>> +
>> +* ``KMD``: Kernel-mode driver.
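
Side note for readers following the thread: the memory-fence notion
above is easiest to see in code. Below is a minimal user-space sketch;
it is not taken from any driver, the names are made up, a shared 64-bit
seqno location is assumed, and a real UMD would wait through
driver/kernel support rather than busy-polling:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical memory fence: considered signaled once the value at
 * @addr reaches @wanted. Both the CPU and the GPU can write @addr.
 */
struct mem_fence {
	_Atomic uint64_t *addr;	/* shared seqno location */
	uint64_t wanted;	/* seqno to wait for */
};

static bool mem_fence_signaled(const struct mem_fence *f)
{
	/* Acquire: payload writes become visible after signaling. */
	return atomic_load_explicit(f->addr, memory_order_acquire) >=
		f->wanted;
}

static void mem_fence_signal(_Atomic uint64_t *addr, uint64_t seqno)
{
	/* Release: prior writes are visible to whoever sees @seqno. */
	atomic_store_explicit(addr, seqno, memory_order_release);
}

static void mem_fence_wait(const struct mem_fence *f)
{
	/* Nothing bounds this wait, which is exactly why the kernel
	 * must not wait on memory fences with locks held. A real UMD
	 * would back off or use a futex-like wait instead of spinning. */
	while (!mem_fence_signaled(f))
		;
}
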
>> +
>> +
>> +Synchronous / Asynchronous VM_BIND operation
>> +============================================
>> +
>> +Synchronous VM_BIND
>> +___________________
>> +With Synchronous VM_BIND, the VM_BIND operations all complete before the
>> +IOCTL returns. A synchronous VM_BIND takes neither in-fences nor
>> +out-fences. Synchronous VM_BIND may block and wait for GPU operations;
>> +for example swap-in or clearing, or even previous binds.
>> +
>> +Asynchronous VM_BIND
>> +____________________
>> +Asynchronous VM_BIND accepts both in-syncobjs and out-syncobjs. While
>> +the IOCTL may return immediately, the VM_BIND operations wait for the
>> +in-syncobjs before modifying the GPU page-tables, and signal the
>> +out-syncobjs when the modification is done in the sense that the next
>> +exec function that awaits the out-syncobjs will see the change. Errors
>> +are reported synchronously, assuming that the asynchronous part of the
>> +job never errors. In low-memory situations the implementation may
>> +block, performing the VM_BIND synchronously, because there might not
>> +be enough memory immediately available for preparing the asynchronous
>> +operation.
>> +
>> +If the VM_BIND IOCTL takes a list or an array of operations as an
>> +argument, the in-syncobjs need to signal before the first operation
>> +starts to execute, and the out-syncobjs signal after the last operation
>> +completes. Operations in the operation list can be assumed, where it
>> +matters, to complete in order.
>> +
>> +Since asynchronous VM_BIND operations may use dma-fences embedded in
>> +out-syncobjs and internally in KMD to signal bind completion, any
>> +memory fences given as VM_BIND in-fences need to be awaited
>> +synchronously before the VM_BIND ioctl returns, since dma-fences,
>
> IOCTL
>
>> +required to signal in a reasonable amount of time, can never be made
>> +to depend on memory fences that don't have such a restriction.
>> +
>> +To aid in supporting user-space queues, the VM_BIND may take a bind
>> +context.
>> +
>> +The purpose of an Asynchronous VM_BIND operation is for user-mode
>> +drivers to be able to pipeline interleaved gpu_vm modifications and
>> +exec functions. For long-running workloads, such pipelining of a bind
>> +operation is not allowed and any in-fences need to be awaited
>> +synchronously. The reason for this is twofold. First, any memory
>> +fences gated by a long-running workload and used as in-syncobjs for the
>> +VM_BIND operation will need to be awaited synchronously anyway (see
>> +above). Second, any dma-fences used as in-syncobjs for VM_BIND
>> +operations for long-running workloads will not allow for pipelining
>> +anyway since long-running workloads don't allow for dma-fences as
>> +out-syncobjs, so while theoretically possible the use of them is
>> +questionable and should be rejected until there is a valuable use-case.
>> +Note that this is not a limitation imposed by dma-fence rules, but
>> +rather a limitation imposed to keep KMD implementation simple. It does
>> +not affect using dma-fences as dependencies for the long-running
>> +workload itself, which is allowed by dma-fence rules, but rather for
>> +the VM_BIND operation only.
>> +
>> +Also for VM_BINDs for long-running gpu_vms the user-mode driver should
>> +typically select memory fences as out-fences since that gives greater
>> +flexibility for the kernel mode driver to inject other operations into
>> +the bind / unbind operations, like for example inserting breakpoints
>> +into batch buffers. The workload execution can then easily be pipelined
>> +behind the bind completion using the memory out-fence as the signal
>> +condition for a GPU semaphore embedded by UMD in the workload.
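
To make the in-/out-syncobj and operation-list semantics above a bit
more concrete, here is a purely hypothetical sketch of what a
multi-operation VM_BIND IOCTL argument could look like. This is not the
Xe or Nouveau uAPI; every name below is invented for illustration:

#include <stdint.h>

#define HYP_VM_BIND_OP_MAP	0	/* map system or VRAM memory */
#define HYP_VM_BIND_OP_UNMAP	1	/* unmap a gpu_vm range */

struct hyp_vm_bind_op {
	uint32_t op;		/* HYP_VM_BIND_OP_* */
	uint32_t flags;
	uint64_t obj_handle;	/* BO to map; unused for unmap */
	uint64_t obj_offset;	/* offset into the BO */
	uint64_t addr;		/* gpu_vm virtual address */
	uint64_t range;		/* size of the mapping */
};

struct hyp_vm_bind {
	uint64_t vm_id;		/* gpu_vm to modify */
	uint64_t bind_ctx_id;	/* ops sharing a bind context complete
				 * in submission order */
	uint64_t ops;		/* userptr to hyp_vm_bind_op array;
				 * ops complete in array order */
	uint64_t in_syncs;	/* userptr to in-syncobjs: all must
				 * signal before op 0 starts */
	uint64_t out_syncs;	/* userptr to out-syncobjs: signaled
				 * after the last op completes */
	uint32_t num_ops;
	uint32_t num_in_syncs;
	uint32_t num_out_syncs;
	uint32_t pad;
};

A synchronous flavor would then be the num_in_syncs == num_out_syncs ==
0 case, with the IOCTL waiting for completion before returning.
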
>> +
>> +Multi-operation VM_BIND IOCTL error handling and interrupts
>> +===========================================================
>> +
>> +The VM_BIND operations of the IOCTL may error due to lack of resources
>> +to complete and also due to interrupted waits. In both situations UMD
>> +should preferably restart the IOCTL after taking suitable action. If
>> +UMD has over-committed a memory resource, an -ENOSPC error will be
>> +returned, and UMD may then unbind resources that are not used at the
>> +moment and restart the IOCTL. On -EINTR, UMD should simply restart the
>> +IOCTL, and on -ENOMEM user-space may either attempt to free known
>> +system memory resources or abort the operation. If aborting as a
>> +result of a failed operation in a list of operations, some operations
>> +may still have completed, and to get back to a known state, user-space
>> +should therefore attempt to unbind all virtual memory regions touched
>> +by the failing IOCTL.
>> +Unbind operations are guaranteed not to cause any errors due to
>> +resource constraints.
>> +In between a failed VM_BIND IOCTL and a successful restart there may
>> +be implementation-defined restrictions on the use of the gpu_vm. For a
>> +description of why, please see KMD implementation details under `error
>> +state saving`_.
>> +
>> +Sample uAPI implementations
>> +===========================
>> +Suggested uAPI implementations at the moment of writing can be found for
>> +the Nouveau driver `here
>> +`_.
>> +and for the Xe driver `here
>> +`_.
>> +
>> +KMD implementation details
>> +==========================
>> +
>> +Error state saving
>> +__________________
>> +Open: When the VM_BIND IOCTL returns an error, some operations, or
>> +even parts of an operation, may have been completed. If the IOCTL is
>> +restarted, in order to know where to restart, the KMD can either put
>> +the gpu_vm in an error state and save one instance of the needed
>> +restart state internally. In this case, KMD needs to block further
>> +modifications of the gpu_vm state that may cause additional failures
>> +requiring a restart state save, until the error has been fully
>> +resolved. If the uAPI instead defines a pointer to a UMD allocated
>> +cookie in the IOCTL struct, it could also choose to store the restart
>> +state in that cookie.
>> +
>> +The restart state may, for example, be the number of successfully
>> +completed operations.
>> +
>> +Easiest for UMD would of course be if KMD did a full unwind on error
>> +so that no error state needs to be saved.
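
The error-handling contract above maps to a fairly mechanical restart
loop in UMD. Again everything here is hypothetical: the umd_*() helpers
and the hyp_vm_bind struct from my earlier sketch are invented for
illustration only:

#include <errno.h>
#include <sys/ioctl.h>

/* Made-up request number, matching the hyp_vm_bind sketch above. */
#define HYP_IOCTL_VM_BIND _IOWR('d', 0x40, struct hyp_vm_bind)

/* Hypothetical helpers a real UMD would have its own versions of. */
extern int umd_evict_unused_binds(int fd);	/* unbind idle ranges */
extern int umd_trim_caches(void);		/* free system memory */
extern void umd_unbind_all_ops(int fd, const struct hyp_vm_bind *args);

/* Restart the VM_BIND IOCTL until success; unwind on hard failure. */
static int umd_vm_bind(int fd, struct hyp_vm_bind *args)
{
	for (;;) {
		if (ioctl(fd, HYP_IOCTL_VM_BIND, args) == 0)
			return 0;

		switch (errno) {
		case EINTR:
			/* Interrupted wait: simply restart. */
			continue;
		case ENOSPC:
			/* Over-committed memory resource: unbind
			 * currently unused ranges, then restart. */
			if (umd_evict_unused_binds(fd) == 0)
				continue;
			break;
		case ENOMEM:
			/* Attempt to free known system memory. */
			if (umd_trim_caches() == 0)
				continue;
			break;
		}

		/* Aborting: some operations in the list may already
		 * have completed, so unbind every region the IOCTL
		 * touched to get back to a known state. Unbinds are
		 * guaranteed not to fail on resource constraints. */
		umd_unbind_all_ops(fd, args);
		return -errno;
	}
}
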
>> diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
>> index 2516fe141db6..0f062e1346d2 100644
>> --- a/Documentation/gpu/rfc/xe.rst
>> +++ b/Documentation/gpu/rfc/xe.rst
>> @@ -138,8 +138,8 @@ memory fences. Ideally with helper support so people don't get it wrong in all
>>  possible ways.
>>
>>  As a key measurable result, the benefits of ASYNC VM_BIND and a discussion of
>> -various flavors, error handling and a sample API should be documented here or in
>> -a separate document pointed to by this document.
>> +various flavors, error handling and sample API suggestions are documented in
>> +Documentation/gpu/drm-vm-bind-async.rst
>>
>>  Userptr integration and vm_bind
>>  -------------------------------
>