From: Sia Jee Heng <jee.heng.sia@intel.com>
To: vkoul@kernel.org, Eugeniy.Paltsev@synopsys.com
Cc: andriy.shevchenko@linux.intel.com, dmaengine@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 14/15] dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA BYTE and HALFWORD registers
Date: Mon, 12 Oct 2020 12:21:59 +0800
Message-Id: <20201012042200.29787-15-jee.heng.sia@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20201012042200.29787-1-jee.heng.sia@intel.com>
References: <20201012042200.29787-1-jee.heng.sia@intel.com>

Add support for programming the Intel KeemBay AxiDMA BYTE and HALFWORD
registers.

The Intel KeemBay AxiDMA supports data transfers between device and
memory in both directions. The BYTE and HALFWORD registers are needed
by the I2C, I3C, I2S, SPI and UART controllers, which use 8-bit and
16-bit FIFOs for memory-to-device transfers. Zero-padding functionality
is provided to avoid pre-processing of the data on the CPU.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sia Jee Heng <jee.heng.sia@intel.com>
---
 .../dma/dw-axi-dmac/dw-axi-dmac-platform.c    | 44 ++++++++++++++++---
 1 file changed, 39 insertions(+), 5 deletions(-)
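
Note (illustration only, not part of the applied change): the 8-bit and
16-bit paths added below are reached through the standard dmaengine slave
configuration, since the driver derives reg_width from
chan->config.dst_addr_width. The minimal client-side sketch that follows
shows this under assumed names - the "tx" channel label, the
example_setup_tx_dma() helper and the fifo_phys address are placeholders.

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/err.h>

/* Hypothetical peripheral-driver setup; all names here are placeholders. */
static int example_setup_tx_dma(struct device *dev, dma_addr_t fifo_phys)
{
	struct dma_chan *chan;
	struct dma_slave_config cfg = {
		.direction	= DMA_MEM_TO_DEV,
		.dst_addr	= fifo_phys,
		/*
		 * A 1-byte bus width maps to DWAXIDMAC_TRANS_WIDTH_8, i.e.
		 * the DMAC_APB_BYTE_WR_CH_EN register programmed below;
		 * DMA_SLAVE_BUSWIDTH_2_BYTES would select the HALFWORD path.
		 */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_1_BYTE,
		.dst_maxburst	= 4,
	};

	chan = dma_request_chan(dev, "tx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	return dmaengine_slave_config(chan, &cfg);
}
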
diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
index 0f40b41fd5c0..d4fca3ffe67f 100644
--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
@@ -312,7 +312,7 @@ static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
 				       struct axi_dma_desc *first)
 {
 	u32 priority = chan->chip->dw->hdata->priority[chan->id];
-	u32 reg, irq_mask;
+	u32 reg, irq_mask, reg_width, offset, val;
 	u8 lms = 0; /* Select AXI0 master for LLI fetching */
 
 	if (unlikely(axi_chan_is_hw_enable(chan))) {
@@ -334,6 +334,25 @@ static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
 	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_SRC_POS);
 	switch (chan->direction) {
 	case DMA_MEM_TO_DEV:
+		if (chan->chip->apb_regs) {
+			reg_width = __ffs(chan->config.dst_addr_width);
+			/*
+			 * Configure Byte and Halfword register
+			 * for MEM_TO_DEV only.
+			 */
+			if (reg_width == DWAXIDMAC_TRANS_WIDTH_16) {
+				offset = DMAC_APB_HALFWORD_WR_CH_EN;
+				val = ioread32(chan->chip->apb_regs + offset);
+				val |= BIT(chan->id);
+				iowrite32(val, chan->chip->apb_regs + offset);
+			} else if (reg_width == DWAXIDMAC_TRANS_WIDTH_8) {
+				offset = DMAC_APB_BYTE_WR_CH_EN;
+				val = ioread32(chan->chip->apb_regs + offset);
+				val |= BIT(chan->id);
+				iowrite32(val, chan->chip->apb_regs + offset);
+			}
+		}
+
 		reg |= (chan->config.device_fc ?
 			DWAXIDMAC_TT_FC_MEM_TO_PER_DST :
 			DWAXIDMAC_TT_FC_MEM_TO_PER_DMAC)
@@ -1054,8 +1073,9 @@ static int dma_chan_terminate_all(struct dma_chan *dchan)
 {
 	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
 	u32 chan_active = BIT(chan->id) << DMAC_CHAN_EN_SHIFT;
+	u32 reg_width = __ffs(chan->config.dst_addr_width);
 	unsigned long flags;
-	u32 val;
+	u32 offset, val;
 	int ret;
 	LIST_HEAD(head);
 
@@ -1067,9 +1087,23 @@ static int dma_chan_terminate_all(struct dma_chan *dchan)
 		dev_warn(dchan2dev(dchan),
 			 "%s failed to stop\n", axi_chan_name(chan));
 
-	if (chan->direction != DMA_MEM_TO_MEM)
-		dw_axi_dma_set_hw_channel(chan->chip,
-					  chan->hw_hs_num, false);
+	if (chan->direction != DMA_MEM_TO_MEM) {
+		ret = dw_axi_dma_set_hw_channel(chan->chip,
+						chan->hw_hs_num, false);
+		if (ret == 0 && chan->direction == DMA_MEM_TO_DEV) {
+			if (reg_width == DWAXIDMAC_TRANS_WIDTH_8) {
+				offset = DMAC_APB_BYTE_WR_CH_EN;
+				val = ioread32(chan->chip->apb_regs + offset);
+				val &= ~BIT(chan->id);
+				iowrite32(val, chan->chip->apb_regs + offset);
+			} else if (reg_width == DWAXIDMAC_TRANS_WIDTH_16) {
+				offset = DMAC_APB_HALFWORD_WR_CH_EN;
+				val = ioread32(chan->chip->apb_regs + offset);
+				val &= ~BIT(chan->id);
+				iowrite32(val, chan->chip->apb_regs + offset);
+			}
+		}
+	}
 
 	spin_lock_irqsave(&chan->vc.lock, flags);
-- 
2.18.0