Date: Wed, 16 Mar 2022 12:41:46 +0000
Subject: Re: [PATCH v2 4/8] drm/virtio: Improve DMA API usage for shmem BOs
To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
 Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Tomeu Vizoso,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
 Steven Price, Alyssa Rosenzweig
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Gustavo Padovan, dri-devel@lists.freedesktop.org, Dmitry Osipenko
References: <20220314224253.236359-1-dmitry.osipenko@collabora.com>
 <20220314224253.236359-5-dmitry.osipenko@collabora.com>
From: Robin Murphy
In-Reply-To: <20220314224253.236359-5-dmitry.osipenko@collabora.com>

On 2022-03-14 22:42, Dmitry Osipenko wrote:
> DRM API requires the DRM's driver to be backed with the device that can
> be used for generic DMA operations. The VirtIO-GPU device can't perform
> DMA operations if it uses PCI transport because PCI device driver creates
> a virtual VirtIO-GPU device that isn't associated with the PCI. Use PCI's
> GPU device for the DRM's device instead of the VirtIO-GPU device and drop
> DMA-related hacks from the VirtIO-GPU driver.
> 
> Signed-off-by: Dmitry Osipenko
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.c    | 22 +++++++---
>  drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
>  drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
>  drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------------
>  drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
>  5 files changed, 37 insertions(+), 66 deletions(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
> index 5f25a8d15464..8449dad3e65c 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
> @@ -46,9 +46,9 @@ static int virtio_gpu_modeset = -1;
>  MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
>  module_param_named(modeset, virtio_gpu_modeset, int, 0400);
>  
> -static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
> +static int virtio_gpu_pci_quirk(struct drm_device *dev)
>  {
> -	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
> +	struct pci_dev *pdev = to_pci_dev(dev->dev);
>  	const char *pname = dev_name(&pdev->dev);
>  	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
>  	char unique[20];
> @@ -101,6 +101,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
>  static int virtio_gpu_probe(struct virtio_device *vdev)
>  {
>  	struct drm_device *dev;
> +	struct device *dma_dev;
>  	int ret;
>  
>  	if (drm_firmware_drivers_only() && virtio_gpu_modeset == -1)
> @@ -109,18 +110,29 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
>  	if (virtio_gpu_modeset == 0)
>  		return -EINVAL;
>  
> -	dev = drm_dev_alloc(&driver, &vdev->dev);
> +	/*
> +	 * If GPU's parent is a PCI device, then we will use this PCI device
> +	 * for the DRM's driver device because GPU won't have PCI's IOMMU DMA
> +	 * ops in this case since GPU device is sitting on a separate (from PCI)
> +	 * virtio-bus.
> +	 */
> +	if (!strcmp(vdev->dev.parent->bus->name, "pci"))

Nit: dev_is_pci() ?

However, what about other VirtIO transports? Wouldn't virtio-mmio with
F_ACCESS_PLATFORM be in a similar situation?

Robin.
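Purely for illustration, an untested sketch of how the dev_is_pci() form of
the check above could look; this is not part of Dmitry's patch, and dma_dev
here is simply the local variable the patch already introduces:

	/* Untested sketch: same decision via dev_is_pci() instead of
	 * comparing the bus name; vdev->dev.parent is the transport device. */
	if (dev_is_pci(vdev->dev.parent))
		dma_dev = vdev->dev.parent;
	else
		dma_dev = &vdev->dev;
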
> +		dma_dev = vdev->dev.parent;
> +	else
> +		dma_dev = &vdev->dev;
> +
> +	dev = drm_dev_alloc(&driver, dma_dev);
>  	if (IS_ERR(dev))
>  		return PTR_ERR(dev);
>  	vdev->priv = dev;
>  
>  	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
> -		ret = virtio_gpu_pci_quirk(dev, vdev);
> +		ret = virtio_gpu_pci_quirk(dev);
>  		if (ret)
>  			goto err_free;
>  	}
>  
> -	ret = virtio_gpu_init(dev);
> +	ret = virtio_gpu_init(vdev, dev);
>  	if (ret)
>  		goto err_free;
>  
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 0a194aaad419..b2d93cb12ebf 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -100,8 +100,6 @@ struct virtio_gpu_object {
>  
>  struct virtio_gpu_object_shmem {
>  	struct virtio_gpu_object base;
> -	struct sg_table *pages;
> -	uint32_t mapped;
>  };
>  
>  struct virtio_gpu_object_vram {
> @@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
>  };
>  
>  struct virtio_gpu_device {
> -	struct device *dev;
>  	struct drm_device *ddev;
>  
>  	struct virtio_device *vdev;
> @@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
>  void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
>  
>  /* virtgpu_kms.c */
> -int virtio_gpu_init(struct drm_device *dev);
> +int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
>  void virtio_gpu_deinit(struct drm_device *dev);
>  void virtio_gpu_release(struct drm_device *dev);
>  int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
> index 3313b92db531..0d1e3eb61bee 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_kms.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
>  	vgdev->num_capsets = num_capsets;
>  }
>  
> -int virtio_gpu_init(struct drm_device *dev)
> +int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
>  {
>  	static vq_callback_t *callbacks[] = {
>  		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
> @@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
>  	u32 num_scanouts, num_capsets;
>  	int ret = 0;
>  
> -	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>  		return -ENODEV;
>  
>  	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
> @@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
>  
>  	vgdev->ddev = dev;
>  	dev->dev_private = vgdev;
> -	vgdev->vdev = dev_to_virtio(dev->dev);
> -	vgdev->dev = dev->dev;
> +	vgdev->vdev = vdev;
>  
>  	spin_lock_init(&vgdev->display_info_lock);
>  	spin_lock_init(&vgdev->resource_export_lock);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 0b8cbb87f8d8..1964c0d8b51f 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
>  
>  	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
>  	if (virtio_gpu_is_shmem(bo)) {
> -		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
> -
> -		if (shmem->pages) {
> -			if (shmem->mapped) {
> -				dma_unmap_sgtable(vgdev->vdev->dev.parent,
> -						  shmem->pages, DMA_TO_DEVICE, 0);
> -				shmem->mapped = 0;
> -			}
> -
> -			sg_free_table(shmem->pages);
> -			kfree(shmem->pages);
> -			shmem->pages = NULL;
> -			drm_gem_shmem_unpin(&bo->base);
> -		}
> -
>  		drm_gem_shmem_free(&bo->base);
>  	} else if (virtio_gpu_is_vram(bo)) {
>  		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
> @@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
>  					unsigned int *nents)
>  {
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
> -	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
>  	struct scatterlist *sg;
> -	int si, ret;
> +	struct sg_table *pages;
> +	int si;
>  
> -	ret = drm_gem_shmem_pin(&bo->base);
> -	if (ret < 0)
> -		return -EINVAL;
> -
> -	/*
> -	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
> -	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
> -	 * dma-ops. This is discouraged for other drivers, but should be fine
> -	 * since virtio_gpu doesn't support dma-buf import from other devices.
> -	 */
> -	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
> -	ret = PTR_ERR(shmem->pages);
> -	if (ret) {
> -		drm_gem_shmem_unpin(&bo->base);
> -		shmem->pages = NULL;
> -		return ret;
> -	}
> +	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
> +	if (IS_ERR(pages))
> +		return PTR_ERR(pages);
>  
> -	if (use_dma_api) {
> -		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
> -				      shmem->pages, DMA_TO_DEVICE, 0);
> -		if (ret)
> -			return ret;
> -		*nents = shmem->mapped = shmem->pages->nents;
> -	} else {
> -		*nents = shmem->pages->orig_nents;
> -	}
> +	if (use_dma_api)
> +		*nents = pages->nents;
> +	else
> +		*nents = pages->orig_nents;
>  
>  	*ents = kvmalloc_array(*nents,
>  			       sizeof(struct virtio_gpu_mem_entry),
> @@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
>  	}
>  
>  	if (use_dma_api) {
> -		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
> +		for_each_sgtable_dma_sg(pages, sg, si) {
>  			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
>  			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
>  			(*ents)[si].padding = 0;
>  		}
>  	} else {
> -		for_each_sgtable_sg(shmem->pages, sg, si) {
> +		for_each_sgtable_sg(pages, sg, si) {
>  			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
>  			(*ents)[si].length = cpu_to_le32(sg->length);
>  			(*ents)[si].padding = 0;
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index 2edf31806b74..06566e44307d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
>  	struct virtio_gpu_transfer_to_host_2d *cmd_p;
>  	struct virtio_gpu_vbuffer *vbuf;
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
> -	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
>  
>  	if (virtio_gpu_is_shmem(bo) && use_dma_api)
> -		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
> -					    shmem->pages, DMA_TO_DEVICE);
> +		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
> +					    bo->base.sgt, DMA_TO_DEVICE);
>  
>  	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>  	memset(cmd_p, 0, sizeof(*cmd_p));
> @@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
>  	struct virtio_gpu_vbuffer *vbuf;
>  	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
>  
> -	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
> -		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
> -		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
> -					    shmem->pages, DMA_TO_DEVICE);
> -	}
> +	if (virtio_gpu_is_shmem(bo) && use_dma_api)
> +		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
> +					    bo->base.sgt, DMA_TO_DEVICE);
>  
>  	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
>  	memset(cmd_p, 0, sizeof(*cmd_p));