Subject: Re: [PATCH v9 05/32] drm: etnaviv: fix common struct sg_table related issues
To: Marek Szyprowski, dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Cc: Christoph Hellwig, Bartlomiej Zolnierkiewicz, linux-arm-kernel@lists.infradead.org, David Airlie, Daniel Vetter, Lucas Stach, etnaviv@lists.freedesktop.org
References: <20200826063316.23486-1-m.szyprowski@samsung.com> <20200826063316.23486-6-m.szyprowski@samsung.com>
In-Reply-To: <20200826063316.23486-6-m.szyprowski@samsung.com>
From: Robin Murphy
Message-ID: <57a23432-87f3-c6b3-0623-1ddd3c569e90@arm.com>
Date: Tue, 1 Sep 2020 19:39:32 +0100
On 2020-08-26 07:32, Marek Szyprowski wrote:
> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
> returns the number of entries created in the DMA address space.
> However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and
> dma_unmap_sg() must be made with the original number of entries
> passed to dma_map_sg().
>
> struct sg_table is a common structure used to describe a non-contiguous
> memory buffer, and is widely used in the DRM and graphics subsystems. It
> consists of a scatterlist with memory pages and DMA addresses (sgl entry),
> as well as the number of scatterlist entries: CPU pages (orig_nents entry)
> and DMA-mapped pages (nents entry).
>
> It turned out to be a common mistake to misuse the nents and orig_nents
> entries, calling DMA-mapping functions with the wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg()
> function.
>
> To avoid such issues, let's use the common dma-mapping wrappers operating
> directly on struct sg_table objects, and use scatterlist page
> iterators where possible. This almost always hides references to the
> nents and orig_nents entries, making the code robust, easier to follow
> and copy/paste safe.
>
> Signed-off-by: Marek Szyprowski
> ---
>  drivers/gpu/drm/etnaviv/etnaviv_gem.c | 12 +++++-------
>  drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 13 +++----------
>  2 files changed, 8 insertions(+), 17 deletions(-)
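As an aside for anyone reading along, the pitfall described in the commit
message boils down to a short sketch. This is only an illustration of the
open-coded pattern versus the wrapper (the helper name is hypothetical, and
it assumes the dma_map_sgtable()/dma_unmap_sgtable() semantics described in
this series: map orig_nents entries, record the mapped count in nents, and
return 0 or a negative error):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical helper, for illustration only. */
static int sketch_map_buffer(struct device *dev, struct sg_table *sgt)
{
	int mapped, ret;

	/*
	 * Open-coded form: two different entry counts to juggle.
	 * dma_map_sg() may coalesce entries, so the value it returns
	 * (the number of DMA segments) is not orig_nents...
	 */
	mapped = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
	if (mapped <= 0)
		return -ENOMEM;
	sgt->nents = mapped;	/* easy to forget, easy to mix up later */

	/* ...while unmap/sync must still be given orig_nents, not nents. */
	dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);

	/*
	 * Wrapper form used by this patch: both counts stay inside the
	 * sg_table, so the mismatch cannot happen.
	 */
	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;
	dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);

	return 0;
}

With that in mind, the conversions below are mostly mechanical.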
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index f06e19e7be04..eaf1949bc2e4 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -27,7 +27,7 @@ static void etnaviv_gem_scatter_map(struct etnaviv_gem_object *etnaviv_obj)
>  	 * because display controller, GPU, etc. are not coherent.
>  	 */
>  	if (etnaviv_obj->flags & ETNA_BO_CACHE_MASK)
> -		dma_map_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
> +		dma_map_sgtable(dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
>  }
>  
>  static void etnaviv_gem_scatterlist_unmap(struct etnaviv_gem_object *etnaviv_obj)
> @@ -51,7 +51,7 @@ static void etnaviv_gem_scatterlist_unmap(struct etnaviv_gem_object *etnaviv_obj)
>  	 * discard those writes.
>  	 */
>  	if (etnaviv_obj->flags & ETNA_BO_CACHE_MASK)
> -		dma_unmap_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
> +		dma_unmap_sgtable(dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
>  }
>  
>  /* called with etnaviv_obj->lock held */
> @@ -404,9 +404,8 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
>  	}
>  
>  	if (etnaviv_obj->flags & ETNA_BO_CACHED) {
> -		dma_sync_sg_for_cpu(dev->dev, etnaviv_obj->sgt->sgl,
> -				    etnaviv_obj->sgt->nents,
> -				    etnaviv_op_to_dma_dir(op));
> +		dma_sync_sgtable_for_cpu(dev->dev, etnaviv_obj->sgt,
> +					 etnaviv_op_to_dma_dir(op));
>  		etnaviv_obj->last_cpu_prep_op = op;
>  	}
>  
> @@ -421,8 +420,7 @@ int etnaviv_gem_cpu_fini(struct drm_gem_object *obj)
>  	if (etnaviv_obj->flags & ETNA_BO_CACHED) {
>  		/* fini without a prep is almost certainly a userspace error */
>  		WARN_ON(etnaviv_obj->last_cpu_prep_op == 0);
> -		dma_sync_sg_for_device(dev->dev, etnaviv_obj->sgt->sgl,
> -				       etnaviv_obj->sgt->nents,
> +		dma_sync_sgtable_for_device(dev->dev, etnaviv_obj->sgt,
>  			etnaviv_op_to_dma_dir(etnaviv_obj->last_cpu_prep_op));
>  		etnaviv_obj->last_cpu_prep_op = 0;
>  	}
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> index 3607d348c298..13b100553a0b 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> @@ -79,7 +79,7 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>  	if (!context || !sgt)
>  		return -EINVAL;
>  
> -	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
> +	for_each_sgtable_dma_sg(sgt, sg, i) {
>  		u32 pa = sg_dma_address(sg) - sg->offset;
>  		size_t bytes = sg_dma_len(sg) + sg->offset;
>  
> @@ -95,14 +95,7 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
>  	return 0;
>  
>  fail:
> -	da = iova;
> -
> -	for_each_sg(sgt->sgl, sg, i, j) {
> -		size_t bytes = sg_dma_len(sg) + sg->offset;
> -
> -		etnaviv_context_unmap(context, da, bytes);
> -		da += bytes;
> -	}
> +	etnaviv_context_unmap(context, iova, da - iova);

I had to take a closer look to figure this out, but AFAICS it does indeed
work out as a simpler way of achieving the exact same result, and in
fact neatly mirrors how etnaviv_context_map() itself cleans up.

Reviewed-by: Robin Murphy

>  	return ret;
>  }
>  
> @@ -113,7 +106,7 @@ static void etnaviv_iommu_unmap(struct etnaviv_iommu_context *context, u32 iova,
>  	unsigned int da = iova;
>  	int i;
>  
> -	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
> +	for_each_sgtable_dma_sg(sgt, sg, i) {
>  		size_t bytes = sg_dma_len(sg) + sg->offset;
>  
>  		etnaviv_context_unmap(context, da, bytes);
> 
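For anyone else wanting to convince themselves about that fail: path, here
is a tiny standalone sketch (made-up entry sizes, plain userspace C rather
than kernel code) of why the single etnaviv_context_unmap() call covers
exactly the same range as the loop it replaces:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* sg_dma_len(sg) + sg->offset of each entry mapped before the failure */
	const uint32_t bytes[] = { 0x1000, 0x3000, 0x2000 };
	const int mapped = 3;
	const uint32_t iova = 0x10000000;
	uint32_t da = iova;
	uint32_t old_total = 0;
	int i;

	/* Mapping loop: da advances by each successfully mapped entry's size. */
	for (i = 0; i < mapped; i++)
		da += bytes[i];

	/* Old unwind: walk the same entries again and sum the same sizes. */
	for (i = 0; i < mapped; i++)
		old_total += bytes[i];

	/* New unwind: one unmap of (da - iova) bytes starting at iova. */
	assert(old_total == da - iova);
	printf("both unwinds cover 0x%x bytes at 0x%x\n",
	       (unsigned int)(da - iova), (unsigned int)iova);
	return 0;
}

Since da is advanced by exactly the per-entry sizes while mapping, the two
totals are necessarily identical, and both unwinds start at iova.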