From: Kaiwei Liu <kaiwei.liu@unisoc.com>
To: Vinod Koul, Orson Zhai, Baolin Wang, Chunyan Zhang
CC: kaiwei liu, Wenming Wu
Subject: [PATCH V2 2/2] dmaengine: sprd: optimize two stage transfer function
Date: Fri, 22 Dec 2023 19:27:46 +0800
Message-ID: <20231222112746.9720-1-kaiwei.liu@unisoc.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: "kaiwei.liu" <kaiwei.liu@unisoc.com>

SPRD DMA provides a mode in which one channel can start a second channel
after it finishes its own transfer, which we call two-stage transfer mode.
The user can choose which of the two channels raises the interrupt on
completion, and the controller supports up to two such channel groups.

When configuring the registers for two-stage transfer mode, apply the mask
bits so that only the intended fields are updated, and clear the two-stage
configuration when the DMA channel is released.

The two-stage transfer function is mainly used by SPRD audio, which now
also requires the data to be accessed through a register window on the
device side, so take the transfer step from src_port_window_size and
dst_port_window_size in struct dma_slave_config.
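
For illustration only, a minimal sketch (not taken from this patch) of how a
client such as the audio driver could describe the device-side window through
struct dma_slave_config; the channel name "tx", the FIFO address and the
window size below are made-up placeholders:

	#include <linux/dmaengine.h>

	static int example_slave_config(struct device *dev, dma_addr_t dev_fifo)
	{
		struct dma_slave_config cfg = { };
		struct dma_chan *chan;
		int ret;

		chan = dma_request_chan(dev, "tx");	/* hypothetical channel name */
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		cfg.direction = DMA_MEM_TO_DEV;
		cfg.dst_addr = dev_fifo;		/* device FIFO base address */
		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		/*
		 * Non-zero window size: with this patch the MEM_TO_DEV path
		 * takes the source step from here instead of deriving it from
		 * the address width via sprd_dma_get_step().
		 */
		cfg.src_port_window_size = 4;		/* placeholder value */

		ret = dmaengine_slave_config(chan, &cfg);
		if (ret)
			dma_release_channel(chan);

		return ret;
	}
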
Signed-off-by: kaiwei.liu <kaiwei.liu@unisoc.com>
---
Change in V2:
 - changed due to [PATCH 1/2]
---
 drivers/dma/sprd-dma.c | 116 ++++++++++++++++++++++++-----------------
 1 file changed, 69 insertions(+), 47 deletions(-)

diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
index cb48731d70b2..e9e113142fd2 100644
--- a/drivers/dma/sprd-dma.c
+++ b/drivers/dma/sprd-dma.c
@@ -68,6 +68,7 @@
 #define SPRD_DMA_GLB_TRANS_DONE_TRG	BIT(18)
 #define SPRD_DMA_GLB_BLOCK_DONE_TRG	BIT(17)
 #define SPRD_DMA_GLB_FRAG_DONE_TRG	BIT(16)
+#define SPRD_DMA_GLB_TRG_MASK		GENMASK(19, 16)
 #define SPRD_DMA_GLB_TRG_OFFSET		16
 #define SPRD_DMA_GLB_DEST_CHN_MASK	GENMASK(13, 8)
 #define SPRD_DMA_GLB_DEST_CHN_OFFSET	8
@@ -155,6 +156,13 @@
 #define SPRD_DMA_SOFTWARE_UID		0
 
+#define SPRD_DMA_SRC_CHN0_INT		9
+#define SPRD_DMA_SRC_CHN1_INT		10
+#define SPRD_DMA_DST_CHN0_INT		11
+#define SPRD_DMA_DST_CHN1_INT		12
+#define SPRD_DMA_2STAGE_SET		1
+#define SPRD_DMA_2STAGE_CLEAR		0
+
 /* dma data width values */
 enum sprd_dma_datawidth {
 	SPRD_DMA_DATAWIDTH_1_BYTE,
@@ -431,53 +439,57 @@ static enum sprd_dma_req_mode sprd_dma_get_req_type(struct sprd_dma_chn *schan)
 	return (frag_reg >> SPRD_DMA_REQ_MODE_OFFSET) & SPRD_DMA_REQ_MODE_MASK;
 }
 
-static int sprd_dma_set_2stage_config(struct sprd_dma_chn *schan)
+static void sprd_dma_2stage_write(struct sprd_dma_chn *schan,
+				  u32 config_type, u32 grp_offset)
 {
 	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
-	u32 val, chn = schan->chn_num + 1;
-
-	switch (schan->chn_mode) {
-	case SPRD_DMA_SRC_CHN0:
-		val = chn & SPRD_DMA_GLB_SRC_CHN_MASK;
-		val |= BIT(schan->trg_mode - 1) << SPRD_DMA_GLB_TRG_OFFSET;
-		val |= SPRD_DMA_GLB_2STAGE_EN;
-		if (schan->int_type != SPRD_DMA_NO_INT)
-			val |= SPRD_DMA_GLB_SRC_INT;
-
-		sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP1, val, val);
-		break;
-
-	case SPRD_DMA_SRC_CHN1:
-		val = chn & SPRD_DMA_GLB_SRC_CHN_MASK;
-		val |= BIT(schan->trg_mode - 1) << SPRD_DMA_GLB_TRG_OFFSET;
-		val |= SPRD_DMA_GLB_2STAGE_EN;
-		if (schan->int_type != SPRD_DMA_NO_INT)
-			val |= SPRD_DMA_GLB_SRC_INT;
-
-		sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP2, val, val);
-		break;
-
-	case SPRD_DMA_DST_CHN0:
-		val = (chn << SPRD_DMA_GLB_DEST_CHN_OFFSET) &
-		       SPRD_DMA_GLB_DEST_CHN_MASK;
-		val |= SPRD_DMA_GLB_2STAGE_EN;
-		if (schan->int_type != SPRD_DMA_NO_INT)
-			val |= SPRD_DMA_GLB_DEST_INT;
-
-		sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP1, val, val);
-		break;
-
-	case SPRD_DMA_DST_CHN1:
-		val = (chn << SPRD_DMA_GLB_DEST_CHN_OFFSET) &
-		       SPRD_DMA_GLB_DEST_CHN_MASK;
-		val |= SPRD_DMA_GLB_2STAGE_EN;
-		if (schan->int_type != SPRD_DMA_NO_INT)
-			val |= SPRD_DMA_GLB_DEST_INT;
-
-		sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP2, val, val);
-		break;
+	u32 mask_val;
+	u32 chn = schan->chn_num + 1;
+	u32 val = 0;
+
+	if (config_type == SPRD_DMA_2STAGE_SET) {
+		if (schan->chn_mode == SPRD_DMA_SRC_CHN0 ||
+		    schan->chn_mode == SPRD_DMA_SRC_CHN1) {
+			val = chn & SPRD_DMA_GLB_SRC_CHN_MASK;
+			val |= BIT(schan->trg_mode - 1) << SPRD_DMA_GLB_TRG_OFFSET;
+			val |= SPRD_DMA_GLB_2STAGE_EN;
+			if (schan->int_type & SPRD_DMA_SRC_CHN0_INT ||
+			    schan->int_type & SPRD_DMA_SRC_CHN1_INT)
+				val |= SPRD_DMA_GLB_SRC_INT;
+			mask_val = SPRD_DMA_GLB_SRC_INT | SPRD_DMA_GLB_TRG_MASK |
+				   SPRD_DMA_GLB_SRC_CHN_MASK;
+		} else {
+			val = (chn << SPRD_DMA_GLB_DEST_CHN_OFFSET) &
+			       SPRD_DMA_GLB_DEST_CHN_MASK;
+			val |= SPRD_DMA_GLB_2STAGE_EN;
+			if (schan->int_type & SPRD_DMA_DST_CHN0_INT ||
+			    schan->int_type & SPRD_DMA_DST_CHN1_INT)
+				val |= SPRD_DMA_GLB_DEST_INT;
+			mask_val = SPRD_DMA_GLB_DEST_INT | SPRD_DMA_GLB_DEST_CHN_MASK;
+		}
+	} else {
+		if (schan->chn_mode == SPRD_DMA_SRC_CHN0 ||
+		    schan->chn_mode == SPRD_DMA_SRC_CHN1)
+			mask_val = SPRD_DMA_GLB_SRC_INT | SPRD_DMA_GLB_TRG_MASK |
+				   SPRD_DMA_GLB_2STAGE_EN | SPRD_DMA_GLB_SRC_CHN_MASK;
+		else
+			mask_val = SPRD_DMA_GLB_DEST_INT | SPRD_DMA_GLB_2STAGE_EN |
+				   SPRD_DMA_GLB_DEST_CHN_MASK;
+	}
+	sprd_dma_glb_update(sdev, grp_offset, mask_val, val);
+}
 
-	default:
+static int sprd_dma_2stage_config(struct sprd_dma_chn *schan, u32 config_type)
+{
+	struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
+
+	if (schan->chn_mode == SPRD_DMA_SRC_CHN0 ||
+	    schan->chn_mode == SPRD_DMA_DST_CHN0)
+		sprd_dma_2stage_write(schan, config_type, SPRD_DMA_GLB_2STAGE_GRP1);
+	else if (schan->chn_mode == SPRD_DMA_SRC_CHN1 ||
+		 schan->chn_mode == SPRD_DMA_DST_CHN1)
+		sprd_dma_2stage_write(schan, config_type, SPRD_DMA_GLB_2STAGE_GRP2);
+	else {
 		dev_err(sdev->dma_dev.dev, "invalid channel mode setting %d\n",
 			schan->chn_mode);
 		return -EINVAL;
@@ -545,7 +557,7 @@ static void sprd_dma_start(struct sprd_dma_chn *schan)
 	 * Set 2-stage configuration if the channel starts one 2-stage
 	 * transfer.
 	 */
-	if (schan->chn_mode && sprd_dma_set_2stage_config(schan))
+	if (schan->chn_mode && sprd_dma_2stage_config(schan, SPRD_DMA_2STAGE_SET))
 		return;
 
 	/*
@@ -569,6 +581,12 @@ static void sprd_dma_stop(struct sprd_dma_chn *schan)
 	sprd_dma_set_pending(schan, false);
 	sprd_dma_unset_uid(schan);
 	sprd_dma_clear_int(schan);
+	/*
+	 * If 2-stage transfer is used, the configuration must be cleared
+	 * when the DMA channel is released.
+	 */
+	if (schan->chn_mode)
+		sprd_dma_2stage_config(schan, SPRD_DMA_2STAGE_CLEAR);
 	schan->cur_desc = NULL;
 }
 
@@ -757,7 +775,9 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
 	phys_addr_t llist_ptr;
 
 	if (dir == DMA_MEM_TO_DEV) {
-		src_step = sprd_dma_get_step(slave_cfg->src_addr_width);
+		src_step = slave_cfg->src_port_window_size ?
+			   slave_cfg->src_port_window_size :
+			   sprd_dma_get_step(slave_cfg->src_addr_width);
 		if (src_step < 0) {
 			dev_err(sdev->dma_dev.dev, "invalid source step\n");
 			return src_step;
@@ -773,7 +793,9 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
 		else
 			dst_step = SPRD_DMA_NONE_STEP;
 	} else {
-		dst_step = sprd_dma_get_step(slave_cfg->dst_addr_width);
+		dst_step = slave_cfg->dst_port_window_size ?
+			   slave_cfg->dst_port_window_size :
+			   sprd_dma_get_step(slave_cfg->dst_addr_width);
 		if (dst_step < 0) {
 			dev_err(sdev->dma_dev.dev, "invalid destination step\n");
 			return dst_step;
-- 
2.17.1