From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
	Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v17 16/18] drm/virtio: Attach shmem BOs dynamically
Date: Fri, 15 Sep 2023 02:27:19 +0300
Message-ID: <20230914232721.408581-17-dmitry.osipenko@collabora.com>
In-Reply-To: <20230914232721.408581-1-dmitry.osipenko@collabora.com>
References: <20230914232721.408581-1-dmitry.osipenko@collabora.com>

Prepare for the addition of memory shrinker support by attaching shmem pages
to the host dynamically on first use. The attach virtqueue command was not
fenced and no virtqueue kick was made in the BO creation code path, so the
attachment was effectively already happening dynamically, just implicitly.
Making the attachment explicitly dynamic allows more code to be simplified
and reused once the shrinker is added. virtio_gpu_object_shmem_init() now
runs with the reservation lock held, which will be important for the
shrinker.
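
For illustration, the caller pattern this patch establishes looks roughly
like the sketch below (not part of the diff; example_prepare_objs() is a
hypothetical wrapper, while virtio_gpu_array_lock_resv(),
virtio_gpu_array_unlock_resv() and virtio_gpu_array_prepare() are the
helpers used or added by this series):

/* Hypothetical helper showing the lock-then-reattach ordering. */
static int example_prepare_objs(struct virtio_gpu_device *vgdev,
				struct virtio_gpu_object_array *objs)
{
	int ret;

	/* Take the reservation locks of all BOs in the array first. */
	ret = virtio_gpu_array_lock_resv(objs);
	if (ret)
		return ret;

	/*
	 * Re-attach any detached shmem BO to the host while the locks are
	 * held; BOs that are still attached are left untouched.
	 */
	ret = virtio_gpu_array_prepare(vgdev, objs);
	if (ret)
		virtio_gpu_array_unlock_resv(objs);

	return ret;
}
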
Acked-by: Gerd Hoffmann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  7 +++
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 26 ++++++++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 32 ++++++----
 drivers/gpu/drm/virtio/virtgpu_object.c | 80 ++++++++++++++++++++-----
 drivers/gpu/drm/virtio/virtgpu_submit.c | 15 ++++-
 5 files changed, 132 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 5a4b74b7b318..8c82530eae82 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,6 +89,7 @@ struct virtio_gpu_object {
 	uint32_t hw_res_handle;
 	bool dumb;
 	bool created;
+	bool detached;
 	bool host3d_blob, guest_blob;
 	uint32_t blob_mem, blob_flags;
 
@@ -313,6 +314,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
 				       struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs);
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
 void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
@@ -458,6 +461,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
+int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo);
+
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo);
+
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
 			       uint32_t *resid);
 /* virtgpu_prime.c */
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 625c05d625bf..97e67064c97e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -295,6 +295,26 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
 	spin_unlock(&vgdev->obj_free_lock);
 }
 
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+			     struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo;
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++) {
+		bo = gem_to_virtio_gpu_obj(objs->objs[i]);
+
+		if (virtio_gpu_is_shmem(bo) && bo->detached) {
+			ret = virtio_gpu_reattach_shmem_object_locked(bo);
+			if (ret)
+				break;
+		}
+	}
+
+	return ret;
+}
+
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 {
 	int err;
@@ -303,6 +323,12 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 		err = drm_gem_shmem_pin(&bo->base);
 		if (err)
 			return err;
+
+		err = virtio_gpu_reattach_shmem_object(bo);
+		if (err) {
+			drm_gem_shmem_unpin(&bo->base);
+			return err;
+		}
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index b24b11f25197..070c29cea26a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -246,6 +246,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (ret != 0)
 		goto err_put_free;
 
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
 	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
 	if (!fence) {
 		ret = -ENOMEM;
@@ -288,11 +292,25 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 		goto err_put_free;
 	}
 
+	ret = virtio_gpu_array_lock_resv(objs);
+	if (ret != 0)
+		goto err_put_free;
+
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
+	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+	if (!fence) {
+		ret = -ENOMEM;
+		goto err_unlock;
+	}
+
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
 			(vgdev, offset,
 			 args->box.w, args->box.h, args->box.x, args->box.y,
-			 objs, NULL);
+			 objs, fence);
 	} else {
 		virtio_gpu_create_context(dev, file);
 
@@ -301,23 +319,13 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 			goto err_put_free;
 		}
 
-		ret = virtio_gpu_array_lock_resv(objs);
-		if (ret != 0)
-			goto err_put_free;
-
-		ret = -ENOMEM;
-		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
-					       0);
-		if (!fence)
-			goto err_unlock;
-
 		virtio_gpu_cmd_transfer_to_host_3d
 			(vgdev,
 			 vfpriv ? vfpriv->ctx_id : 0, offset, args->level,
 			 args->stride, args->layer_stride, &args->box, objs,
 			 fence);
-		dma_fence_put(&fence->f);
 	}
+	dma_fence_put(&fence->f);
 	virtio_gpu_notify(vgdev);
 	return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index ee5d2a70656b..77590d66a56d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -142,10 +142,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	struct sg_table *pages;
 	int si;
 
-	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	pages = drm_gem_shmem_get_pages_sgt_locked(&bo->base);
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
+	if (!ents)
+		return 0;
+
 	if (use_dma_api)
 		*nents = pages->nents;
 	else
@@ -176,6 +179,40 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_mem_entry *ents;
+	unsigned int nents;
+	int err;
+
+	if (!bo->detached)
+		return 0;
+
+	err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (err)
+		return err;
+
+	virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+	bo->detached = false;
+
+	return 0;
+}
+
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo)
+{
+	int ret;
+
+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		return ret;
+	ret = virtio_gpu_reattach_shmem_object_locked(bo);
+	dma_resv_unlock(bo->base.base.resv);
+
+	return ret;
+}
+
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
@@ -202,45 +239,60 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	bo->dumb = params->dumb;
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0)
-		goto err_put_id;
+	if (bo->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+		bo->guest_blob = true;
 
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
 		if (!objs)
-			goto err_free_entry;
+			goto err_put_id;
 		virtio_gpu_array_add_obj(objs, &bo->base.base);
 
 		ret = virtio_gpu_array_lock_resv(objs);
 		if (ret != 0)
 			goto err_put_objs;
+	} else {
+		ret = dma_resv_lock(bo->base.base.resv, NULL);
+		if (ret)
+			goto err_put_id;
 	}
 
 	if (params->blob) {
-		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
-			bo->guest_blob = true;
+		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+		if (ret)
+			goto err_unlock_objs;
+	} else {
+		ret = virtio_gpu_object_shmem_init(vgdev, bo, NULL, NULL);
+		if (ret)
+			goto err_unlock_objs;
+		bo->detached = true;
+	}
+
+	if (params->blob)
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
-	} else if (params->virgl) {
+	else if (params->virgl)
 		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
 						  objs, fence);
-		virtio_gpu_object_attach(vgdev, bo, ents, nents);
-	} else {
+	else
 		virtio_gpu_cmd_create_resource(vgdev, bo, params,
 					       objs, fence);
-		virtio_gpu_object_attach(vgdev, bo, ents, nents);
-	}
+
+	if (!fence)
+		dma_resv_unlock(bo->base.base.resv);
 
 	*bo_ptr = bo;
 	return 0;
 
+err_unlock_objs:
+	if (fence)
+		virtio_gpu_array_unlock_resv(objs);
+	else
+		dma_resv_unlock(bo->base.base.resv);
 err_put_objs:
 	virtio_gpu_array_put_free(objs);
-err_free_entry:
-	kvfree(ents);
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index 3c00135ead45..94867f485a64 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -465,8 +465,19 @@ static void virtio_gpu_install_out_fence_fd(struct virtio_gpu_submit *submit)
 
 static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit)
 {
-	if (submit->buflist)
-		return virtio_gpu_array_lock_resv(submit->buflist);
+	int err;
+
+	if (submit->buflist) {
+		err = virtio_gpu_array_lock_resv(submit->buflist);
+		if (err)
+			return err;
+
+		err = virtio_gpu_array_prepare(submit->vgdev, submit->buflist);
+		if (err) {
+			virtio_gpu_array_unlock_resv(submit->buflist);
+			return err;
+		}
+	}
 
 	return 0;
 }
-- 
2.41.0