Message-ID: <803e4aa0-0c46-05ba-e90b-188771227f0a@redhat.com>
Date: Thu, 14 Sep 2023 19:25:31 +0200
Subject: Re: [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation
From: Danilo Krummrich
Organization: RedHat
To: Thomas Hellström, airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith.ekstrand@collabora.com
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org
In-Reply-To: <476c46cfddaef125108a117b47ea9f76299ea85c.camel@linux.intel.com>
References: <20230909153125.30032-1-dakr@redhat.com> <20230909153125.30032-7-dakr@redhat.com> <62d9b00a-547a-2106-5ec3-6f6a88023496@linux.intel.com> <476c46cfddaef125108a117b47ea9f76299ea85c.camel@linux.intel.com>

On 9/14/23 19:21, Thomas Hellström wrote:
> On Thu, 2023-09-14 at 18:36 +0200, Danilo Krummrich wrote:
>> On 9/14/23 15:48, Thomas Hellström wrote:
>>> Hi, Danilo
>>>
>>> Some additional minor comments as xe conversion progresses.
>>>
>>> On 9/9/23 17:31, Danilo Krummrich wrote:
>>>> So far the DRM GPUVA manager offers common infrastructure to
>>>> track GPU VA allocations and mappings, generically connect GPU VA
>>>> mappings to their backing buffers and perform more complex mapping
>>>> operations on the GPU VA space.
>>>>
>>>> However, there are more design patterns commonly used by drivers,
>>>> which can potentially be generalized in order to make the DRM GPUVA
>>>> manager represent a basic GPU-VM implementation. In this context,
>>>> this patch aims at generalizing the following elements.
>>>>
>>>> 1) Provide a common dma-resv for GEM objects not being used outside
>>>>    of this GPU-VM.
>>>>
>>>> 2) Provide tracking of external GEM objects (GEM objects which are
>>>>    shared with other GPU-VMs).
>>>>
>>>> 3) Provide functions to efficiently lock all GEM objects dma-resv
>>>>    the GPU-VM contains mappings of.
>>>>
>>>> 4) Provide tracking of evicted GEM objects the GPU-VM contains
>>>>    mappings of, such that validation of evicted GEM objects is
>>>>    accelerated.
>>>>
>>>> 5) Provide some convenience functions for common patterns.
>>>>
>>>> Rather than being designed as a "framework", the target is to make
>>>> all features appear as a collection of optional helper functions,
>>>> such that drivers are free to make use of the DRM GPUVA manager's
>>>> basic functionality and opt-in for other features without setting
>>>> any feature flags, just by making use of the corresponding
>>>> functions.
>>>>
>>>> Big kudos to Boris Brezillon for his help to figure out locking for
>>>> drivers updating the GPU VA space within the fence signalling path.
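As a side note, the extobj classification used by 2) and 3) boils down to a single resv-pointer comparison (drm_gpuvm_is_extobj() further down in the patch). A quick userspace model with stubbed stand-ins — not the real DRM types:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stubbed stand-ins for struct dma_resv, drm_gpuvm and drm_gem_object;
 * just enough to model the pointer comparison, not the real DRM types. */
struct dma_resv_stub { int unused; };

struct gpuvm_stub {
	struct dma_resv_stub *resv;	/* the VM's common reservation object */
};

struct gem_object_stub {
	struct dma_resv_stub *resv;	/* the BO's reservation object */
};

/* Models drm_gpuvm_is_extobj(): a BO is "external" iff its dma-resv is
 * not the VM's common one, i.e. the BO is shared with other VMs. */
static bool is_extobj(const struct gpuvm_stub *vm,
		      const struct gem_object_stub *obj)
{
	return obj && obj->resv != vm->resv;
}
```

BOs created against the VM's common resv are "private" and can be locked wholesale through that single resv; everything else lands on the extobj list.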
>>>>
>>>> Suggested-by: Matthew Brost
>>>> Signed-off-by: Danilo Krummrich
>>>> ---
>>>>
>>>> +/**
>>>> + * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
>>>> + * &drm_gpuvms evicted list
>>>> + * @obj: the &drm_gem_object to add or remove
>>>> + * @evict: indicates whether the object is evicted
>>>> + *
>>>> + * Adds a &drm_gem_object to or removes it from all &drm_gpuvms evicted
>>>> + * list containing a mapping of this &drm_gem_object.
>>>> + */
>>>> +void
>>>> +drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
>>>> +{
>>>> +    struct drm_gpuvm_bo *vm_bo;
>>>> +
>>>> +    drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>>>> +        if (evict)
>>>> +            drm_gpuvm_bo_list_add(vm_bo, evict);
>>>> +        else
>>>> +            drm_gpuvm_bo_list_del(vm_bo, evict);
>>>> +    }
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
>>>> +
>>>
>>> We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that
>>> puts a single gpuvm_bo on the list; the above function could perhaps
>>> be renamed as drm_gpuvm_gem_obj_evict(obj, ....).
>>
>> Makes sense - gonna change that.
>>
>>>
>>> Reason is some vm's are faulting vms which don't have an evict list,
>>> but validate from the pagefault handler. Also evict == false is
>>> dangerous because if called from within an exec, it might remove the
>>> obj from other vm's evict list before they've had a chance to rebind
>>> their VMAs.
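To make the hazard concrete, here is a userspace model of the two entry points under discussion, with stubbed types (the names vm_bo_evict/gem_obj_evict are placeholders for the proposed split, not the final API):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_VMBOS 8

/* One vm_bo links one BO into one VM's evict list. */
struct vm_bo_stub {
	bool on_evict_list;	/* models list_add/list_del on vm->evict */
};

struct gem_object_stub2 {
	struct vm_bo_stub *vm_bos[MAX_VMBOS];	/* all VMs mapping this BO */
	int nr_vm_bos;
};

/* The suggested per-vm_bo primitive: touches a single VM's list only. */
static void vm_bo_evict(struct vm_bo_stub *vm_bo, bool evict)
{
	vm_bo->on_evict_list = evict;
}

/* The per-object helper (drm_gpuvm_gem_obj_evict() in the rename
 * proposal): adds/removes the BO on *every* VM's evict list. With
 * evict == false this is what can strip the BO from other VMs' lists
 * before they had a chance to rebind their VMAs. */
static void gem_obj_evict(struct gem_object_stub2 *obj, bool evict)
{
	for (int i = 0; i < obj->nr_vm_bos; i++)
		vm_bo_evict(obj->vm_bos[i], evict);
}
```

With the per-vm_bo variant, an exec that finishes revalidating its own VM can clear just its own entry, leaving the other VMs' bookkeeping intact.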
>>>
>>>>   static int
>>>>   __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
>>>>              struct drm_gpuva *va)
>>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>>>> index afa50b9059a2..834bb6d6617e 100644
>>>> --- a/include/drm/drm_gpuvm.h
>>>> +++ b/include/drm/drm_gpuvm.h
>>>> @@ -26,10 +26,12 @@
>>>>    */
>>>>   #include
>>>> +#include
>>>>   #include
>>>>   #include
>>>>   #include
>>>> +#include
>>>>   struct drm_gpuvm;
>>>>   struct drm_gpuvm_bo;
>>>> @@ -259,6 +261,38 @@ struct drm_gpuvm {
>>>>        * space
>>>>        */
>>>>       struct dma_resv *resv;
>>>> +
>>>> +    /**
>>>> +     * @extobj: structure holding the extobj list
>>>> +     */
>>>> +    struct {
>>>> +        /**
>>>> +         * @list: &list_head storing &drm_gpuvm_bos serving as
>>>> +         * external object
>>>> +         */
>>>> +        struct list_head list;
>>>> +
>>>> +        /**
>>>> +         * @lock: spinlock to protect the extobj list
>>>> +         */
>>>> +        spinlock_t lock;
>>>> +    } extobj;
>>>> +
>>>> +    /**
>>>> +     * @evict: structure holding the evict list and evict list lock
>>>> +     */
>>>> +    struct {
>>>> +        /**
>>>> +         * @list: &list_head storing &drm_gpuvm_bos currently being
>>>> +         * evicted
>>>> +         */
>>>> +        struct list_head list;
>>>> +
>>>> +        /**
>>>> +         * @lock: spinlock to protect the evict list
>>>> +         */
>>>> +        spinlock_t lock;
>>>> +    } evict;
>>>>   };
>>>>   void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>>> @@ -268,6 +302,21 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>>>               const struct drm_gpuvm_ops *ops);
>>>>   void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
>>>> +/**
>>>> + * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object
>>>> + * is an external object
>>>> + * @gpuvm: the &drm_gpuvm to check
>>>> + * @obj: the &drm_gem_object to check
>>>> + *
>>>> + * Returns: true if the &drm_gem_object &dma_resv differs from the
>>>> + * &drm_gpuvms &dma_resv, false otherwise
>>>> + */
>>>> +static inline bool drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
>>>> +                       struct drm_gem_object *obj)
>>>> +{
>>>> +    return obj && obj->resv != gpuvm->resv;
>>>> +}
>>>> +
>>>>   static inline struct drm_gpuva *
>>>>   __drm_gpuva_next(struct drm_gpuva *va)
>>>>   {
>>>> @@ -346,6 +395,128 @@ __drm_gpuva_next(struct drm_gpuva *va)
>>>>   #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
>>>>       list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list,
>>>> rb.entry)
>>>> +/**
>>>> + * struct drm_gpuvm_exec - &drm_gpuvm abstraction of &drm_exec
>>>> + *
>>>> + * This structure should be created on the stack as &drm_exec should be.
>>>> + *
>>>> + * Optionally, @extra can be set in order to lock additional
>>>> + * &drm_gem_objects.
>>>> + */
>>>> +struct drm_gpuvm_exec {
>>>> +    /**
>>>> +     * @exec: the &drm_exec structure
>>>> +     */
>>>> +    struct drm_exec exec;
>>>> +
>>>> +    /**
>>>> +     * @vm: the &drm_gpuvm to lock its DMA reservations
>>>> +     */
>>>> +    struct drm_gpuvm *vm;
>>>> +
>>>> +    /**
>>>> +     * @extra: Callback and corresponding private data for the driver
>>>> +     * to lock arbitrary additional &drm_gem_objects.
>>>> +     */
>>>> +    struct {
>>>> +        /**
>>>> +         * @fn: The driver callback to lock additional
>>>> +         * &drm_gem_objects.
>>>> +         */
>>>> +        int (*fn)(struct drm_gpuvm_exec *vm_exec,
>>>> +              unsigned int num_fences);
>>>> +
>>>> +        /**
>>>> +         * @priv: driver private data for the @fn callback
>>>> +         */
>>>> +        void *priv;
>>>> +    } extra;
>>>> +};
>>>> +
>>>> +/**
>>>> + * drm_gpuvm_prepare_vm() - prepare the GPUVMs common dma-resv
>>>> + * @gpuvm: the &drm_gpuvm
>>>> + * @exec: the &drm_exec context
>>>> + * @num_fences: the amount of &dma_fences to reserve
>>>> + *
>>>> + * Calls drm_exec_prepare_obj() for the GPUVMs dummy &drm_gem_object.
>>>> + *
>>>> + * Using this function directly, it is the driver's responsibility to
>>>> + * call drm_exec_init() and drm_exec_fini() accordingly.
>>>> + *
>>>> + * Returns: 0 on success, negative error code on failure.
>>>> + */
>>>> +static inline int
>>>> +drm_gpuvm_prepare_vm(struct drm_gpuvm *gpuvm,
>>>> +             struct drm_exec *exec,
>>>> +             unsigned int num_fences)
>>>> +{
>>>> +    return drm_exec_prepare_obj(exec, &gpuvm->d_obj, num_fences);
>>>> +}
>>>> +
>>>> +int drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
>>>> +                  struct drm_exec *exec,
>>>> +                  unsigned int num_fences);
>>>> +
>>>> +int drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm,
>>>> +                struct drm_exec *exec,
>>>> +                u64 addr, u64 range,
>>>> +                unsigned int num_fences);
>>>> +
>>>> +int drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec,
>>>> +            unsigned int num_fences,
>>>> +            bool interruptible);
>>>> +
>>>> +int drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec,
>>>> +                  struct drm_gem_object **objs,
>>>> +                  unsigned int num_objs,
>>>> +                  unsigned int num_fences,
>>>> +                  bool interruptible);
>>>> +
>>>> +int drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec,
>>>> +                  u64 addr, u64 range,
>>>> +                  unsigned int num_fences,
>>>> +                  bool interruptible);
>>>> +
>>>> +/**
>>>> + * drm_gpuvm_exec_unlock() - unlock all dma-resv of all associated BOs
>>>> + * @vm_exec: the &drm_gpuvm_exec abstraction
>>>> + *
>>>> + * Releases all dma-resv locks of all &drm_gem_objects previously
>>>> + * acquired through drm_gpuvm_exec_lock() or its variants.
>>>> + */
>>>> +static inline void
>>>> +drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
>>>> +{
>>>> +    drm_exec_fini(&vm_exec->exec);
>>>> +}
>>>> +
>>>> +int drm_gpuvm_validate(struct drm_gpuvm *gpuvm);
>>>> +void drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm,
>>>> +                  struct drm_exec *exec,
>>>> +                  struct dma_fence *fence,
>>>> +                  enum dma_resv_usage private_usage,
>>>> +                  enum dma_resv_usage extobj_usage);
>>>> +
>>>> +/**
>>>> + * drm_gpuvm_exec_resv_add_fence()
>>>> + * @vm_exec: the &drm_gpuvm_exec abstraction
>>>> + * @fence: fence to add
>>>> + * @private_usage: private dma-resv usage
>>>> + * @extobj_usage: extobj dma-resv usage
>>>> + *
>>>> + * See drm_gpuvm_resv_add_fence().
>>>> + */
>>>> +static inline void
>>>> +drm_gpuvm_exec_resv_add_fence(struct drm_gpuvm_exec *vm_exec,
>>>> +                  struct dma_fence *fence,
>>>> +                  enum dma_resv_usage private_usage,
>>>> +                  enum dma_resv_usage extobj_usage)
>>>> +{
>>>> +    drm_gpuvm_resv_add_fence(vm_exec->vm, &vm_exec->exec, fence,
>>>> +                 private_usage, extobj_usage);
>>>> +}
>>>> +
>>>>   /**
>>>>    * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and
>>>>    * &drm_gem_object combination
>>>> @@ -398,6 +569,18 @@ struct drm_gpuvm_bo {
>>>>                * gpuva list.
>>>>                */
>>>>               struct list_head gem;
>>>> +
>>>> +            /**
>>>> +             * @extobj: List entry to attach to the &drm_gpuvms
>>>> +             * extobj list.
>>>> +             */
>>>> +            struct list_head extobj;
>>>> +
>>>> +            /**
>>>> +             * @evict: List entry to attach to the &drm_gpuvms
>>>> +             * evict list.
>>>> +             */
>>>> +            struct list_head evict;
>>>>           } entry;
>>>>       } list;
>>>>   };
>>>> @@ -432,6 +615,9 @@ struct drm_gpuvm_bo *
>>>>   drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
>>>>             struct drm_gem_object *obj);
>>>> +void drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict);
>>>> +void drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo);
>>>> +
>>>>   /**
>>>>    * drm_gpuvm_bo_for_each_va() - iterator to walk over a list of
>>>>    * &drm_gpuva
>>>>    * @va__: &drm_gpuva structure to assign to in each iteration step
>>>> @@ -837,6 +1023,17 @@ struct drm_gpuvm_ops {
>>>>        * used.
>>>>        */
>>>>       int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv);
>>>> +
>>>> +    /**
>>>> +     * @bo_validate: called from drm_gpuvm_validate()
>>>> +     *
>>>> +     * Drivers receive this callback for every evicted
>>>> +     * &drm_gem_object being mapped in the corresponding &drm_gpuvm.
>>>> +     *
>>>> +     * Typically, drivers would call their driver-specific variant of
>>>> +     * ttm_bo_validate() from within this callback.
>>>> +     */
>>>> +    int (*bo_validate)(struct drm_gem_object *obj);
>>>
>>> Same here. Could we have a vm_bo as an argument instead, so that
>>> the callback knows what gpuvm we're targeting and can mark all its
>>> gpu_vas for revalidation? Or is that intended to be done elsewhere?
>>
>> Makes sense as well. I'll change that too.
>
> I forgot, drm_gpuvm_validate() would preferably take a drm_gpuvm_exec
> argument because we need it in the validate callback. It's also easy
> for the driver to subclass further if needed, to pass even more
> arguments to its validate callback.

Hm.. that implies that a driver open-coding the drm_exec loop still
needs to use a struct drm_gpuvm_exec rather than just a struct drm_exec.
What is this needed for in Xe? Do we expect other drivers to need it?
Might a priv void pointer make more sense?

>
> /Thomas
>
>
>>
>>>
>>>>   };
>>>>   int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
>>>
>>> Thanks,
>>>
>>> Thomas
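For reference, the subclassing route relies on the usual container_of() pattern: the driver embeds the base struct and recovers its own type in the callback. A standalone sketch with stubbed stand-ins for drm_exec/drm_gpuvm_exec (hypothetical names, not the real structs):

```c
#include <stddef.h>

#ifndef container_of
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#endif

/* Stubbed stand-ins for struct drm_exec / drm_gpuvm_exec. */
struct exec_stub { int unused; };

struct gpuvm_exec_stub {
	struct exec_stub exec;
	int (*validate_fn)(struct gpuvm_exec_stub *vm_exec);
};

/* A driver-side subclass carrying extra arguments for its validate
 * callback -- the alternative to passing a void *priv. */
struct driver_exec_stub {
	struct gpuvm_exec_stub base;
	int extra_arg;
};

static int driver_validate(struct gpuvm_exec_stub *vm_exec)
{
	/* Recover the driver subclass from the embedded base struct. */
	struct driver_exec_stub *de =
		container_of(vm_exec, struct driver_exec_stub, base);
	return de->extra_arg;
}
```

The priv-pointer alternative would instead hang a void * off the base struct, at the cost of a cast and no type checking; either way the driver gets its extra arguments back inside the callback.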