Date: Mon, 6 Jun 2022 20:54:55 +0100
From: Adrian Larumbe
To: Vinod Koul
Cc: Christoph Hellwig, michal.simek@xilinx.com, dmaengine@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] dmaengine: remove DMA_MEMCPY_SG once again
Message-ID: <20220606195455.qmq3yu6mc6g4rmm2@sobremesa>
References: <20220606074733.622616-1-hch@lst.de>

On 06.06.2022 23:23, Vinod Koul wrote:
>On 06-06-22, 09:47, Christoph Hellwig wrote:
>> This was removed before due to the complete lack of users, but
>> 3218910fd585 ("dmaengine: Add core function and capability check for
>> DMA_MEMCPY_SG") and 29cf37fa6dd9 ("dmaengine: Add consumer for the new
>> DMA_MEMCPY_SG API function.") added it back despite still not having
>> any users whatsoever.
>>
>> Fixes: 3218910fd585 ("dmaengine: Add core function and capability check for DMA_MEMCPY_SG")
>> Fixes: 29cf37fa6dd9 ("dmaengine: Add consumer for the new DMA_MEMCPY_SG API function.")
>
>This is a consumer of the driver API, and it was brought back with the
>premise that users would also come...

It's commit 29cf37fa6dd9 ("dmaengine: Add consumer for the new DMA_MEMCPY_SG
API function."). The two previous commits add the new driver API callback and
document it.

>Adrian, Michal, any reason why the user is not mainline yet?

Just double-checked mainline, and all three commits are there.

>> Signed-off-by: Christoph Hellwig
>> ---
>>  .../driver-api/dmaengine/provider.rst |  10 --
>>  drivers/dma/dmaengine.c               |   7 -
>>  drivers/dma/xilinx/xilinx_dma.c       | 122 ------------------
>>  include/linux/dmaengine.h             |  20 ---
>>  4 files changed, 159 deletions(-)
>>
>> diff --git a/Documentation/driver-api/dmaengine/provider.rst b/Documentation/driver-api/dmaengine/provider.rst
>> index 1e0f1f85d10e5..ceac2a300e328 100644
>> --- a/Documentation/driver-api/dmaengine/provider.rst
>> +++ b/Documentation/driver-api/dmaengine/provider.rst
>> @@ -162,16 +162,6 @@ Currently, the types available are:
>>
>>    - The device is able to do memory to memory copies
>>
>> -- - DMA_MEMCPY_SG
>> -
>> -  - The device supports memory to memory scatter-gather transfers.
>> -
>> -  - Even though a plain memcpy can look like a particular case of a
>> -    scatter-gather transfer, with a single chunk to copy, it's a distinct
>> -    transaction type in the mem2mem transfer case. This is because some very
>> -    simple devices might be able to do contiguous single-chunk memory copies,
>> -    but have no support for more complex SG transfers.
>> -
>>  - No matter what the overall size of the combined chunks for source and
>>    destination is, only as many bytes as the smallest of the two will be
>>    transmitted. That means the number and size of the scatter-gather buffers in
>> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
>> index e80feeea0e018..c741b6431958c 100644
>> --- a/drivers/dma/dmaengine.c
>> +++ b/drivers/dma/dmaengine.c
>> @@ -1153,13 +1153,6 @@ int dma_async_device_register(struct dma_device *device)
>>                  return -EIO;
>>          }
>>
>> -        if (dma_has_cap(DMA_MEMCPY_SG, device->cap_mask) && !device->device_prep_dma_memcpy_sg) {
>> -                dev_err(device->dev,
>> -                        "Device claims capability %s, but op is not defined\n",
>> -                        "DMA_MEMCPY_SG");
>> -                return -EIO;
>> -        }
>> -
>>          if (dma_has_cap(DMA_XOR, device->cap_mask) && !device->device_prep_dma_xor) {
>>                  dev_err(device->dev,
>>                          "Device claims capability %s, but op is not defined\n",
>> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
>> index cd62bbb50e8b4..6276934d4d2be 100644
>> --- a/drivers/dma/xilinx/xilinx_dma.c
>> +++ b/drivers/dma/xilinx/xilinx_dma.c
>> @@ -2127,126 +2127,6 @@ xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
>>          return NULL;
>>  }
>>
>> -/**
>> - * xilinx_cdma_prep_memcpy_sg - prepare descriptors for a memcpy_sg transaction
>> - * @dchan: DMA channel
>> - * @dst_sg: Destination scatter list
>> - * @dst_sg_len: Number of entries in destination scatter list
>> - * @src_sg: Source scatter list
>> - * @src_sg_len: Number of entries in source scatter list
>> - * @flags: transfer ack flags
>> - *
>> - * Return: Async transaction descriptor on success and NULL on failure
>> - */
>> -static struct dma_async_tx_descriptor *xilinx_cdma_prep_memcpy_sg(
>> -                        struct dma_chan *dchan, struct scatterlist *dst_sg,
>> -                        unsigned int dst_sg_len, struct scatterlist *src_sg,
>> -                        unsigned int src_sg_len, unsigned long flags)
>> -{
>> -        struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
>> -        struct xilinx_dma_tx_descriptor *desc;
>> -        struct xilinx_cdma_tx_segment *segment, *prev = NULL;
>> -        struct xilinx_cdma_desc_hw *hw;
>> -        size_t len, dst_avail, src_avail;
>> -        dma_addr_t dma_dst, dma_src;
>> -
>> -        if (unlikely(dst_sg_len == 0 || src_sg_len == 0))
>> -                return NULL;
>> -
>> -        if (unlikely(!dst_sg || !src_sg))
>> -                return NULL;
>> -
>> -        desc = xilinx_dma_alloc_tx_descriptor(chan);
>> -        if (!desc)
>> -                return NULL;
>> -
>> -        dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
>> -        desc->async_tx.tx_submit = xilinx_dma_tx_submit;
>> -
>> -        dst_avail = sg_dma_len(dst_sg);
>> -        src_avail = sg_dma_len(src_sg);
>> -        /*
>> -         * loop until there is either no more source or no more destination
>> -         * scatterlist entry
>> -         */
>> -        while (true) {
>> -                len = min_t(size_t, src_avail, dst_avail);
>> -                len = min_t(size_t, len, chan->xdev->max_buffer_len);
>> -                if (len == 0)
>> -                        goto fetch;
>> -
>> -                /* Allocate the link descriptor from DMA pool */
>> -                segment = xilinx_cdma_alloc_tx_segment(chan);
>> -                if (!segment)
>> -                        goto error;
>> -
>> -                dma_dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) -
>> -                        dst_avail;
>> -                dma_src = sg_dma_address(src_sg) + sg_dma_len(src_sg) -
>> -                        src_avail;
>> -                hw = &segment->hw;
>> -                hw->control = len;
>> -                hw->src_addr = dma_src;
>> -                hw->dest_addr = dma_dst;
>> -                if (chan->ext_addr) {
>> -                        hw->src_addr_msb = upper_32_bits(dma_src);
>> -                        hw->dest_addr_msb = upper_32_bits(dma_dst);
>> -                }
>> -
>> -                if (prev) {
>> -                        prev->hw.next_desc = segment->phys;
>> -                        if (chan->ext_addr)
>> -                                prev->hw.next_desc_msb =
>> -                                        upper_32_bits(segment->phys);
>> -                }
>> -
>> -                prev = segment;
>> -                dst_avail -= len;
>> -                src_avail -= len;
>> -                list_add_tail(&segment->node, &desc->segments);
>> -
>> -fetch:
>> -                /* Fetch the next dst scatterlist entry */
>> -                if (dst_avail == 0) {
>> -                        if (dst_sg_len == 0)
>> -                                break;
>> -                        dst_sg = sg_next(dst_sg);
>> -                        if (dst_sg == NULL)
>> -                                break;
>> -                        dst_sg_len--;
>> -                        dst_avail = sg_dma_len(dst_sg);
>> -                }
>> -                /* Fetch the next src scatterlist entry */
>> -                if (src_avail == 0) {
>> -                        if (src_sg_len == 0)
>> -                                break;
>> -                        src_sg = sg_next(src_sg);
>> -                        if (src_sg == NULL)
>> -                                break;
>> -                        src_sg_len--;
>> -                        src_avail = sg_dma_len(src_sg);
>> -                }
>> -        }
>> -
>> -        if (list_empty(&desc->segments)) {
>> -                dev_err(chan->xdev->dev,
>> -                        "%s: Zero-size SG transfer requested\n", __func__);
>> -                goto error;
>> -        }
>> -
>> -        /* Link the last hardware descriptor with the first. */
>> -        segment = list_first_entry(&desc->segments,
>> -                                   struct xilinx_cdma_tx_segment, node);
>> -        desc->async_tx.phys = segment->phys;
>> -        prev->hw.next_desc = segment->phys;
>> -
>> -        return &desc->async_tx;
>> -
>> -error:
>> -        xilinx_dma_free_tx_descriptor(chan, desc);
>> -        return NULL;
>> -}
>> -
>>  /**
>>   * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
>>   * @dchan: DMA channel
>> @@ -3240,9 +3120,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
>>                          DMA_RESIDUE_GRANULARITY_SEGMENT;
>>          } else if (xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
>>                  dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask);
>> -                dma_cap_set(DMA_MEMCPY_SG, xdev->common.cap_mask);
>>                  xdev->common.device_prep_dma_memcpy = xilinx_cdma_prep_memcpy;
>> -                xdev->common.device_prep_dma_memcpy_sg = xilinx_cdma_prep_memcpy_sg;
>>                  /* Residue calculation is supported by only AXI DMA and CDMA */
>>                  xdev->common.residue_granularity =
>>                                  DMA_RESIDUE_GRANULARITY_SEGMENT;
>> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
>> index b46b88e6aa0d1..c923f4e60f240 100644
>> --- a/include/linux/dmaengine.h
>> +++ b/include/linux/dmaengine.h
>> @@ -50,7 +50,6 @@ enum dma_status {
>>   */
>>  enum dma_transaction_type {
>>          DMA_MEMCPY,
>> -        DMA_MEMCPY_SG,
>>          DMA_XOR,
>>          DMA_PQ,
>>          DMA_XOR_VAL,
>> @@ -887,11 +886,6 @@ struct dma_device {
>>          struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
>>                  struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
>>                  size_t len, unsigned long flags);
>> -        struct dma_async_tx_descriptor *(*device_prep_dma_memcpy_sg)(
>> -                struct dma_chan *chan,
>> -                struct scatterlist *dst_sg, unsigned int dst_nents,
>> -                struct scatterlist *src_sg, unsigned int src_nents,
>> -                unsigned long flags);
>>          struct dma_async_tx_descriptor *(*device_prep_dma_xor)(
>>                  struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
>>                  unsigned int src_cnt, size_t len, unsigned long flags);
>> @@ -1060,20 +1054,6 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy(
>>                                              len, flags);
>>  }
>>
>> -static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy_sg(
>> -                struct dma_chan *chan,
>> -                struct scatterlist *dst_sg, unsigned int dst_nents,
>> -                struct scatterlist *src_sg, unsigned int src_nents,
>> -                unsigned long flags)
>> -{
>> -        if (!chan || !chan->device || !chan->device->device_prep_dma_memcpy_sg)
>> -                return NULL;
>> -
>> -        return chan->device->device_prep_dma_memcpy_sg(chan, dst_sg, dst_nents,
>> -                                                       src_sg, src_nents,
>> -                                                       flags);
>> -}
>> -
>>  static inline bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
>>                                                  enum dma_desc_metadata_mode mode)
>>  {
>> --
>> 2.30.2
>
>--
>~Vinod
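
For context, a rough sketch of what a consumer of the interface removed above
might have looked like. This is purely illustrative and not taken from any
in-tree or posted driver (the patch's point being exactly that no such user
ever landed): the function name example_memcpy_sg and the surrounding setup
are hypothetical, and only the dmaengine_prep_dma_memcpy_sg() prototype comes
from the quoted dmaengine.h hunk. With DMA_MEMCPY_SG gone, a client would
instead issue one dmaengine_prep_dma_memcpy() per chunk pair.

/*
 * Hypothetical consumer sketch of the removed DMA_MEMCPY_SG API.
 * Assumes both scatterlists have already been dma_map_sg()'d for the
 * channel's device.
 */
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static int example_memcpy_sg(struct scatterlist *dst_sg, unsigned int dst_nents,
                             struct scatterlist *src_sg, unsigned int src_nents)
{
        struct dma_async_tx_descriptor *tx;
        struct dma_chan *chan;
        dma_cap_mask_t mask;
        dma_cookie_t cookie;
        int ret = 0;

        /* Ask the dmaengine core for any channel advertising the capability. */
        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY_SG, mask);
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
                return -ENODEV;

        /* Prepare a single scatter-gather memory-to-memory transaction. */
        tx = dmaengine_prep_dma_memcpy_sg(chan, dst_sg, dst_nents,
                                          src_sg, src_nents,
                                          DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx) {
                ret = -EIO;
                goto out;
        }

        /* Queue it, kick the engine, then poll for completion. */
        cookie = dmaengine_submit(tx);
        if (dma_submit_error(cookie)) {
                ret = -EIO;
                goto out;
        }
        dma_async_issue_pending(chan);

        if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
                ret = -ETIMEDOUT;
out:
        dma_release_channel(chan);
        return ret;
}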