From: Sowjanya Komatineni
Subject: [PATCH V2 18/20] spi: tegra114: add support for HW CS timing
Date: Thu, 4 Apr 2019 17:14:17 -0700
Message-ID: <1554423259-26056-18-git-send-email-skomatineni@nvidia.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1554423259-26056-1-git-send-email-skomatineni@nvidia.com>
References: <1554423259-26056-1-git-send-email-skomatineni@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements the set_cs_timing SPI controller method so that SPI
client drivers can configure device-specific CS setup, hold, and inactive
delay timings. It also widens the SPI_SET_CYCLES_BETWEEN_PACKETS() mask
from 4 to 5 bits to cover the full cycles-between-packets field.
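For illustration only (not part of this patch): a minimal sketch of how a
client driver might request these timings, assuming the spi_set_cs_timing()
core helper added earlier in this series. The foo_spi_probe() function and
the cycle counts below are hypothetical.

#include <linux/spi/spi.h>

/*
 * Hypothetical client probe. spi_set_cs_timing() is assumed from the
 * companion core patch in this series; delays are given in SPI clock
 * cycles. On this controller, setup/hold are only programmed when both
 * are non-zero, and an inactive delay of 0 keeps CS active between
 * packets.
 */
static int foo_spi_probe(struct spi_device *spi)
{
	/* 3 cycles CS setup, 3 cycles hold, 4 inactive cycles between packets */
	spi_set_cs_timing(spi, 3, 3, 4);

	return 0;
}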
Signed-off-by: Sowjanya Komatineni
---
 drivers/spi/spi-tegra114.c | 48 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
index 5cc347b345b1..34dee28554ef 100644
--- a/drivers/spi/spi-tegra114.c
+++ b/drivers/spi/spi-tegra114.c
@@ -96,8 +96,10 @@
 		(reg = (((val) & 0x1) << ((cs) * 8 + 5)) |	\
 			((reg) & ~(1 << ((cs) * 8 + 5))))
 #define SPI_SET_CYCLES_BETWEEN_PACKETS(reg, cs, val)		\
-		(reg = (((val) & 0xF) << ((cs) * 8)) |		\
-			((reg) & ~(0xF << ((cs) * 8))))
+		(reg = (((val) & 0x1F) << ((cs) * 8)) |		\
+			((reg) & ~(0x1F << ((cs) * 8))))
+#define MAX_SETUP_HOLD_CYCLES			16
+#define MAX_INACTIVE_CYCLES			32
 
 #define SPI_TRANS_STATUS			0x010
 #define SPI_BLK_CNT(val)			(((val) >> 0) & 0xFFFF)
@@ -211,6 +213,8 @@ struct tegra_spi_data {
 	u32					command1_reg;
 	u32					dma_control_reg;
 	u32					def_command1_reg;
+	u32					spi_cs_timing1;
+	u32					spi_cs_timing2;
 
 	struct completion			xfer_completion;
 	struct spi_transfer			*curr_xfer;
@@ -728,6 +732,43 @@ static void tegra_spi_deinit_dma_param(struct tegra_spi_data *tspi,
 	dma_release_channel(dma_chan);
 }
 
+static void tegra_spi_set_hw_cs_timing(struct spi_device *spi, u8 setup_dly,
+				       u8 hold_dly, u8 inactive_dly)
+{
+	struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
+	u32 setup_hold;
+	u32 spi_cs_timing;
+	u32 inactive_cycles;
+	u8 cs_state;
+
+	setup_dly = min_t(u8, setup_dly, MAX_SETUP_HOLD_CYCLES);
+	hold_dly = min_t(u8, hold_dly, MAX_SETUP_HOLD_CYCLES);
+	if (setup_dly && hold_dly) {
+		setup_hold = SPI_SETUP_HOLD(setup_dly - 1, hold_dly - 1);
+		spi_cs_timing = SPI_CS_SETUP_HOLD(tspi->spi_cs_timing1,
+						  spi->chip_select,
+						  setup_hold);
+		if (tspi->spi_cs_timing1 != spi_cs_timing) {
+			tspi->spi_cs_timing1 = spi_cs_timing;
+			tegra_spi_writel(tspi, spi_cs_timing, SPI_CS_TIMING1);
+		}
+	}
+
+	inactive_cycles = min_t(u8, inactive_dly, MAX_INACTIVE_CYCLES);
+	if (inactive_cycles)
+		inactive_cycles--;
+	cs_state = inactive_cycles ? 0 : 1;
+	spi_cs_timing = tspi->spi_cs_timing2;
+	SPI_SET_CS_ACTIVE_BETWEEN_PACKETS(spi_cs_timing, spi->chip_select,
+					  cs_state);
+	SPI_SET_CYCLES_BETWEEN_PACKETS(spi_cs_timing, spi->chip_select,
+				       inactive_cycles);
+	if (tspi->spi_cs_timing2 != spi_cs_timing) {
+		tspi->spi_cs_timing2 = spi_cs_timing;
+		tegra_spi_writel(tspi, spi_cs_timing, SPI_CS_TIMING2);
+	}
+}
+
 static u32 tegra_spi_setup_transfer_one(struct spi_device *spi,
 		struct spi_transfer *t, bool is_first_of_msg,
 		bool is_single_xfer)
@@ -1283,6 +1324,7 @@ static int tegra_spi_probe(struct platform_device *pdev)
 	master->setup = tegra_spi_setup;
 	master->cleanup = tegra_spi_cleanup;
 	master->transfer_one_message = tegra_spi_transfer_one_message;
+	master->set_cs_timing = tegra_spi_set_hw_cs_timing;
 	master->num_chipselect = MAX_CHIP_SELECT;
 	master->auto_runtime_pm = true;
 	bus_num = of_alias_get_id(pdev->dev.of_node, "spi");
@@ -1358,6 +1400,8 @@ static int tegra_spi_probe(struct platform_device *pdev)
 	reset_control_deassert(tspi->rst);
 	tspi->def_command1_reg = SPI_M_S;
 	tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1);
+	tspi->spi_cs_timing1 = tegra_spi_readl(tspi, SPI_CS_TIMING1);
+	tspi->spi_cs_timing2 = tegra_spi_readl(tspi, SPI_CS_TIMING2);
 	pm_runtime_put(&pdev->dev);
 	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
 				   tegra_spi_isr_thread, IRQF_ONESHOT,
-- 
2.7.4